[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-ChenHongruixuan--ChangeDetectionRepository":3,"tool-ChenHongruixuan--ChangeDetectionRepository":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":10,"env_os":93,"env_gpu":93,"env_ram":93,"env_deps":94,"category_tags":97,"github_topics":98,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":105,"updated_at":106,"faqs":107,"releases":133},777,"ChenHongruixuan\u002FChangeDetectionRepository","ChangeDetectionRepository","This repository contains Python code for some traditional change detection methods (such as SFA and MAD) and some deep learning-based change detection methods (such as SiamCRNN, DSFA, and FCN-based methods), or provides links to their original websites. 
","ChangeDetectionRepository 是一个专注于遥感图像变化检测的开源代码库，汇集了多种主流算法的 Python 实现。内容涵盖 CVA、SFA、MAD 等传统统计方法，以及 SiamCRNN、DSFA 等基于深度学习的模型，其中不少属于无监督学习范畴。这一项目有效解决了研究者在复现论文算法或搭建基准测试时面临的高编码成本问题，让大家能更专注于算法改进与实验分析。此外，内置的部分多时相数据集进一步降低了数据准备门槛。无论是从事遥感解译的科研人员，还是希望快速验证想法的算法开发者，都能在此找到所需资源。其独特优势在于同时兼顾传统方法与深度神经网络，为不同技术背景的用户提供了灵活选择，成为探索变化检测领域的高效起点。","# Change Detection Repository\nIn this repository, we provide Python implementations of some traditional change detection methods (such as SFA and MAD) and some deep learning-based change detection methods (such as SiamCRNN, DSFA, and FCN-based methods), or links to their original websites. Some [multi-temporal datasets](https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FChangeDetectionRepository\u002Ftree\u002Fmaster\u002FDataset) are also contained in this repository. We would be very glad if this repository can provide some help to your research in change detection or remote sensing image interpretation.\n\n\n## Traditional Methods\n### Change Vector Analysis (CVA)\nChange vector analysis (CVA) [1] is one of the most commonly used methods, and it can provide both change intensity and change direction. \n\n### Slow Feature Analysis (SFA)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_efc20fba0139.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nWu et al. [2] proposed a novel CD method based on slow feature analysis (SFA), which aims to find the most invariant component in multitemporal images to highlight changed regions. In addition to change detection, SFA was also used in radiometric correction [3] and scene change detection [4]. This repository contains the Python implementation of SFA and iterative SFA. The MATLAB implementation can be found at http:\u002F\u002Fsigma.whu.edu.cn\u002Fresource.php. 
\n\n### Multivariate Alteration Detection (MAD)\nMAD is a change detection algorithm based on canonical correlation analysis (CCA) that aims to maximize the variance of the projected feature difference. For a detailed introduction to MAD, please refer to [5] and [6]. This repository contains the Python implementation of MAD. The MATLAB implementation can be found at http:\u002F\u002Fwww.imm.dtu.dk\u002F~alan\u002Fsoftware.html. \n\n### PCA-Kmeans\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_6d9877c3cd22.png\" width=\"50%\" height=\"50%\">\u003C\u002Fdiv>\nPCA-Kmeans [12] partitions the difference image into nonoverlapping blocks. Orthonormal eigenvectors are extracted through PCA of the nonoverlapping block set to create an eigenvector space. Each pixel in the difference image is represented with an S-dimensional feature vector, which is the projection of the difference image data onto the generated eigenvector space. Change detection is achieved by partitioning the feature vector space into two clusters using k-means. \n\n## Deep Learning Methods\n### Deep Slow Feature Analysis (DSFA)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_da0a47f64deb.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nDSFA is an unsupervised change detection model that utilizes a dual-stream deep neural network to learn non-linear features and highlights changes via linear SFA. For a detailed introduction to DSFA, please refer to [7]. The TensorFlow implementation of DSFA can be found at https:\u002F\u002Fgithub.com\u002Frulixiang\u002FDSFANet or http:\u002F\u002Fsigma.whu.edu.cn\u002Fresource.php. 
\n\n### Deep Siamese Convolutional Multiple-Layers Recurrent Neural Network (SiamCRNN)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_debdf68f37c3.png\" width=\"70%\" height=\"70%\">\u003C\u002Fdiv>\nSiamCRNN is an end-to-end general multi-source change detection architecture that consists of three subnetworks: a deep siamese convolutional neural network (DSCNN), a multiple-layers RNN (MRNN), and fully connected (FC) layers. The DSCNN has a flexible structure for multisource images and is able to extract spatial–spectral features from homogeneous or heterogeneous VHR image patches. The MRNN, stacked from long short-term memory (LSTM) units, is responsible for mapping the spatial–spectral features extracted by the DSCNN into a new latent feature space and mining the change information between them. In addition, FC, the last part of SiamCRNN, is adopted to predict the change probability. For a detailed introduction to SiamCRNN, please refer to [8]. The TensorFlow implementation of SiamCRNN can be found at https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FSiamCRNN.\n\n### Deep Kernel PCA Convolutional Mapping Network (KPCA-MNet)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_7eec7f7c51a9.png\" width=\"70%\" height=\"70%\">\u003C\u002Fdiv>\nKPCA-MNet is designed for unsupervised binary and multi-class change detection in very-high-resolution images. In KPCA-MNet, the high-level spatial-spectral feature maps are extracted by a deep siamese network consisting of weight-shared KPCA convolutional layers. Then, the change information in the feature difference map is mapped into a 2-D polar domain. Finally, the change detection results are generated by threshold segmentation and clustering algorithms. For a detailed introduction to KPCA-MNet, please refer to [9]. 
The Python implementation can be found at https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FKPCAMNet. \n\n### Deep Siamese Multi-scale Convolutional Neural Network\nIn [14] and [15], a multi-scale feature convolution unit (MFCU) is adopted for change detection in multi-temporal VHR images. The MFCU can extract multi-scale spatial-spectral features in the same layer. Based on this unit, two novel deep siamese convolutional neural networks, called the deep siamese multi-scale convolutional network (DSMS-CN) and the deep siamese multi-scale fully convolutional network (DSMS-FCN), are designed for unsupervised and supervised change detection, respectively. The TensorFlow implementation of this work can be found at https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FDSMSCN.\n\n### SARPCANet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_88b7663644e3.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nSARPCANet utilizes Gabor wavelets and FCM as the pre-classification method to select training samples [10], and then trains a PCANet [11] model with the selected image patches. The original MATLAB implementation can be found at https:\u002F\u002Fgithub.com\u002Fsummitgao\u002FSAR_Change_Detection_GarborPCANet. \n\n### FDCNN\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_cff699f09158.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nFDCNN [13] uses scene-level samples of remote sensing scene classification for learning deep features from different remote sensing scenes at different scales. Then, a new CNN structure and training strategies are proposed for remote sensing image change detection, which is supervised but requires very few pixel-level training samples. The original Caffe implementation can be found at https:\u002F\u002Fgithub.com\u002FMinZHANG-WHU\u002FFDCNN. 
\n\n### DCVA\nDCVA [16] processes pre-change and post-change images through a pre-trained network and extracts bi-temporal deep features for subsequent processing in a CD framework. The original Caffe implementation can be found at https:\u002F\u002Fgithub.com\u002Fsudipansaha\u002FdcvaVHROptical. \n\n### CorrFusionNet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_d7fcb39e39a2.jpg\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nCorrFusionNet [17] is a unified network for scene change detection. CorrFusionNet first extracts the features of the bi-temporal inputs with deep convolutional networks. The extracted features are then projected into a lower-dimensional space to compute the instance-level canonical correlation. Cross-temporal fusion is performed in the CorrFusion module based on the computed correlation. In the objective function, the authors introduced a new formulation for calculating the temporal correlation. The original TensorFlow implementation can be found at https:\u002F\u002Fgithub.com\u002Frulixiang\u002FCorrFusionNet. \n\n### SNUNet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_f9f32915bc02.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nSNUNet-CD [18] is a densely connected siamese network for change detection (the combination of a Siamese network and NestedUNet). SNUNet-CD alleviates the loss of localization information in the deep layers of the neural network through compact information transmission between the encoder and decoder, and between decoders. In addition, an Ensemble Channel Attention Module (ECAM) is proposed for deep supervision. The original PyTorch implementation can be found at https:\u002F\u002Fgithub.com\u002Flikyoo\u002FSiam-NestedUNet. 
\n\n\n## Other Change Detection Repositories\nThere are also some other change detection repositories; you can visit them through the links below:  \n[1] https:\u002F\u002Fgithub.com\u002FBobholamovic\u002FChangeDetectionToolbox  \n[2] https:\u002F\u002Fgithub.com\u002FMinZHANG-WHU\u002FChange-Detection-Review  \n[3] https:\u002F\u002Fgithub.com\u002Fwenhwu\u002Fawesome-remote-sensing-change-detection\n\n\n\n## References\n[1] F. Bovolo and L. Bruzzone, “A Theoretical Framework for Unsupervised Change Detection Based on Change Vector Analysis in the Polar Domain,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 1, pp. 218–236, 2007.  \n[2] C. Wu, B. Du, and L. Zhang, “Slow feature analysis for change detection in multispectral imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 5, pp. 2858–2874, 2014.  \n[3] L. Zhang, C. Wu, and B. Du, “Automatic radiometric normalization for multitemporal remote sensing imagery with iterative slow feature analysis,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 10, pp. 6141–6155, 2014.  \n[4] C. Wu, L. Zhang, and B. Du, “Kernel Slow Feature Analysis for Scene Change Detection,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 4, pp. 2367–2384, 2017.  \n[5] A. A. Nielsen, K. Conradsen, and J. J. Simpson, “Multivariate alteration detection (MAD) and MAF Postprocessing in multispectral, bitemporal image data: New approaches to change detection studies,” Remote Sens. Environ., vol. 64, pp. 1–19, 1998.  \n[6] A. A. Nielsen, “The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data,” IEEE Trans. Image Process., vol. 16, no. 2, pp. 463–478, 2007.  \n[7] B. Du, L. Ru, C. Wu, and L. Zhang, “Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 12, pp. 9976–9992, 2019.  \n[8] H. Chen, C. Wu, B. Du, L. Zhang, and L. 
Wang, “Change Detection in Multisource VHR Images via Deep Siamese Convolutional Multiple-Layers Recurrent Neural Network,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2848–2864, 2020.  \n[9] C. Wu, H. Chen, B. Du, and L. Zhang, “Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network,” arXiv preprint arXiv:1912.08628, 2019. https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08628v1.  \n[10] F. Gao, J. Dong, B. Li, and Q. Xu, “Automatic Change Detection in Synthetic Aperture Radar Images Based on PCANet,” IEEE Geosci. Remote Sens. Lett., vol. 13, no. 12, pp. 1792–1796, 2016.  \n[11] T. H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, “PCANet: A Simple Deep Learning Baseline for Image Classification?,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5017–5032, 2015.  \n[12] T. Celik, “Unsupervised change detection in satellite images using principal component analysis and K-means clustering,” IEEE Geosci. Remote Sens. Lett., vol. 6, no. 4, pp. 772–776, 2009.  \n[13] M. Zhang and W. Shi, “A Feature Difference Convolutional Neural Network-Based Change Detection Method,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 10, pp. 7232–7246, 2020.  \n[14] H. Chen, C. Wu, B. Du and L. Zhang, \"Deep Siamese Multi-scale Convolutional Network for Change Detection in Multi-temporal VHR Images,\" 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 2019, pp. 1-4.  \n[15] H. Chen, C. Wu, B. Du and L. Zhang, \"Change Detection in Multi-temporal VHR Images Based on Deep Siamese Multi-scale Convolutional Neural Network,\" arXiv preprint arXiv:1906.11479, 2020. https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.11479.  \n[16] S. Saha, F. Bovolo, and L. Bruzzone, “Unsupervised deep change vector analysis for multiple-change detection in VHR Images,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 6, pp. 3677–3693, 2019.  \n[17] L. Ru, B. Du and C. 
Wu, \"Multi-Temporal Scene Classification and Scene Change Detection with Correlation based Fusion,\" in IEEE Transactions on Image Processing, doi: 10.1109\u002FTIP.2020.3039328.  \n[18] S. Fang, K. Li, J. Shao and Z. Li, \"SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images,\" in IEEE Geoscience and Remote Sensing Letters, doi: 10.1109\u002FLGRS.2021.3056416.  \n## Q & A\n**For any questions, please [contact us.](mailto:Qschrx@gmail.com)**\n","# 变化检测仓库\n在此仓库中，我们提供了一些传统变化检测方法（如 SFA、MAD）、一些基于深度学习的方法（如 SiamCRNN、DSFA）以及基于 FCN 的方法的 Python 实现，或者它们的原始网站。此仓库还包含一些 [多时相数据集](https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FChangeDetectionRepository\u002Ftree\u002Fmaster\u002FDataset)。如果本仓库能为您的变化检测或遥感图像解译研究提供帮助，我们将非常高兴。\n\n## 传统方法\n### 变化矢量分析 (CVA)\n变化矢量分析 (CVA) [1] 是一种最常用的方法，它可以提供变化强度和变化方向。 \n\n### 慢特征分析 (SFA)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_efc20fba0139.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nWu 等人 [2] 提出了一种基于慢特征分析 (SFA) 的新型变化检测 (CD) 方法，旨在找到多时相图像中最不变的成分以突出变化区域。除了变化检测外，SFA 还用于辐射校正 [3] 和场景变化检测 [4]。此仓库包含 SFA 和迭代 SFA 的 Python 实现。MATLAB 实现可在 http:\u002F\u002Fsigma.whu.edu.cn\u002Fresource.php 找到。 \n\n### 多元变化检测 (MAD)\nMAD 是一种基于典型相关分析 (CCA) 的变化检测算法，旨在最大化投影特征差异的方差。关于 MAD 的详细介绍，请参考 [5] 和 [6]。此仓库包含 MAD 的 Python 实现。MATLAB 实现可在 http:\u002F\u002Fwww.imm.dtu.dk\u002F~alan\u002Fsoftware.html 找到。 \n\n### PCA-Kmeans\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_6d9877c3cd22.png\" width=\"50%\" height=\"50%\">\u003C\u002Fdiv>\nPCA-Kmeans [12] 将差值图像划分为不重叠的块。通过对不重叠块集进行 PCA (主成分分析) 提取正交归一化特征向量，以创建特征向量空间。差值图像中的每个像素都用一个 S 维特征向量表示，该向量是将差值图像数据投影到生成的特征向量空间的结果。通过使用 k-means 将特征向量空间划分为两个簇来实现变化检测。 \n\n## 深度学习方法\n### 深度慢特征分析 (DSFA)\n\u003Cdiv align=center>\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_da0a47f64deb.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nDSFA 是一种无监督变化检测模型，它利用双流深度神经网络学习非线性特征，并通过线性 SFA 突出变化。关于该方法的详细介绍，请参考 [7]。DSFA 的 Tensorflow 实现可在 https:\u002F\u002Fgithub.com\u002Frulixiang\u002FDSFANet 或 http:\u002F\u002Fsigma.whu.edu.cn\u002Fresource.php 找到。 \n\n### 深度孪生卷积多层循环神经网络 (SiamCRNN)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_debdf68f37c3.png\" width=\"70%\" height=\"70%\">\u003C\u002Fdiv>\nSiamCRNN 是一个端到端的通用多源变化检测架构，由三个子网络组成：深度孪生卷积神经网络 (DSCNN)、多层循环神经网络 (MRNN) 和全连接 (FC) 层。DSCNN 具有适用于多源图像的灵活结构，能够从同质或异质超高分辨率 (VHR) 图像块中提取空间 - 光谱特征。由长短期记忆 (LSTM) 单元堆叠而成的 MRNN 负责将 DSCNN 提取的空间 - 光谱特征映射到一个新的潜在特征空间，并挖掘它们之间的变化信息。此外，作为 SiamCRNN 最后一部分的 FC 被用来预测变化概率。关于该方法的详细介绍，请参考 [8]。SiamCRNN 的 Tensorflow 实现可在 https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FSiamCRNN 找到。\n\n### 深度核主成分分析卷积映射网络 (KPCA-MNet)\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_7eec7f7c51a9.png\" width=\"70%\" height=\"70%\">\u003C\u002Fdiv>\nKPCA-MNet 专为超高分辨率图像中的无监督二分类和多类变化检测而设计。在 KPCA-MNet 中，高级空间 - 光谱特征图由由权重共享的 KPCA 卷积层组成的深度孪生网络提取。然后，特征差图中的变化信息被映射到二维极域。最后，通过阈值分割和聚类算法生成变化检测结果。关于该方法的详细介绍，请参考 [9]。Python 实现可在 https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FKPCAMNet 找到。 \n\n### 深度孪生多尺度卷积神经网络\n在文献 [14] 和 [15] 中，采用多尺度特征卷积单元 (MFCU) 进行多时相 VHR 图像的变化检测。MFCU 可以在同一层中提取多尺度空间 - 光谱特征。基于该单元，设计了两种新型深度孪生卷积神经网络，分别称为深度孪生多尺度卷积网络 (DSMS-CN) 和深度孪生多尺度全卷积网络 (DSMS-FCN)，分别用于无监督和有监督变化检测。该工作的 Tensorflow 实现可在 https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FDSMSCN 找到。\n\n### SARPCANet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_88b7663644e3.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nSARPCANet 利用 Gabor 小波和 FCM 
(模糊 C 均值) 作为预分类方法来选择训练样本 [10]，然后使用选定的图像块训练 PCANet [11] 模型。原始 MATLAB 实现可在 https:\u002F\u002Fgithub.com\u002Fsummitgao\u002FSAR_Change_Detection_GarborPCANet 找到。 \n\n### FDCNN\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_cff699f09158.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nFDCNN [13] 使用遥感场景分类的场景级样本来学习不同尺度下不同遥感场景的深度特征。随后，提出了一种新的 CNN (卷积神经网络) 结构和训练策略用于遥感图像变化检测，它是监督式的，但只需要很少的像素级训练样本。原始 Caffe 实现可在 https:\u002F\u002Fgithub.com\u002FMinZHANG-WHU\u002FFDCNN 找到。 \n\n### DCVA\nDCVA [16] 通过预训练网络处理变化前和变化后的图像，并提取双时相深度特征以供 CD (变化检测) 框架中的后续处理。原始 Caffe 实现可在 https:\u002F\u002Fgithub.com\u002Fsudipansaha\u002FdcvaVHROptical 找到。\n\n### CorrFusionNet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_d7fcb39e39a2.jpg\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nCorrFusionNet [17] 是一个名为 CorrFusionNet 的统一网络，用于场景变化检测（scene change detection）。CorrFusionNet 首先使用深度卷积网络（deep convolutional networks）提取双时相输入（bi-temporal inputs）的特征。然后，提取的特征将被投影到较低维空间以计算实例级别的典型相关系数（canonical correlation）。基于计算出的相关性，将在 CorrFusion 模块中执行跨时相融合（cross-temporal fusion）。在目标函数（objective function）中，作者提出了一种新的公式来计算时相相关性。原始的 TensorFlow 实现可以在 https:\u002F\u002Fgithub.com\u002Frulixiang\u002FCorrFusionNet 找到。 \n\n### SNUNet\n\u003Cdiv align=center>\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_readme_f9f32915bc02.png\" width=\"60%\" height=\"60%\">\u003C\u002Fdiv>\nSNUNet-CD [18] 是一种用于变化检测的密集连接（densely connected）孪生网络（Siamese network），即 SNUNet-CD（Siamese network 与 NestedUNet 的结合）。SNUNet-CD 通过在编码器（encoder）与解码器（decoder）之间以及解码器与解码器之间进行紧凑的信息传输，减轻了神经网络深层中定位信息的损失。此外，提出了集成通道注意力模块（Ensemble Channel Attention Module, ECAM）用于深度监督（deep supervision）。原始的 PyTorch 实现可以在 https:\u002F\u002Fgithub.com\u002Flikyoo\u002FSiam-NestedUNet 找到。 \n\n\n## 其他变化检测仓库\n还存在一些其他的变 
化检测仓库，你可以通过以下链接访问它们：  \n[1] https:\u002F\u002Fgithub.com\u002FBobholamovic\u002FChangeDetectionToolbox  \n[2] https:\u002F\u002Fgithub.com\u002FMinZHANG-WHU\u002FChange-Detection-Review  \n[3] https:\u002F\u002Fgithub.com\u002Fwenhwu\u002Fawesome-remote-sensing-change-detection\n\n\n\n## 参考文献\n[1] F. Bovolo and L. Bruzzone, “A Theoretical Framework for Unsupervised Change Detection Based on Change Vector Analysis in the Polar Domain,” IEEE Trans. Geosci. Remote Sens., vol. 45, no. 1, pp. 218–236, 2007.  \n[2] C. Wu, B. Du, and L. Zhang, “Slow feature analysis for change detection in multispectral imagery,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 5, pp. 2858–2874, 2014.  \n[3] L. Zhang, C. Wu, and B. Du, “Automatic radiometric normalization for multitemporal remote sensing imagery with iterative slow feature analysis,” IEEE Trans. Geosci. Remote Sens., vol. 52, no. 10, pp. 6141–6155, 2014.  \n[4] C. Wu, L. Zhang, and B. Du, “Kernel Slow Feature Analysis for Scene Change Detection,” IEEE Trans. Geosci. Remote Sens., vol. 55, no. 4, pp. 2367–2384, 2017.  \n[5] A. A. Nielsen, K. Conradsen, and J. J. Simpson, “Multivariate alteration detection (MAD) and MAF Postprocessing in multispectral, bitemporal image data: New approaches to change detection studies,” Remote Sens. Environ., vol. 64, pp. 1–19, 1998.  \n[6] A. A. Nielsen, “The regularized iteratively reweighted MAD method for change detection in multi- and hyperspectral data,” IEEE Trans. Image Process., vol. 16, no. 2, pp. 463–478, 2007.  \n[7] B. Du, L. Ru, C. Wu, and L. Zhang, “Unsupervised Deep Slow Feature Analysis for Change Detection in Multi-Temporal Remote Sensing Images,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 12, pp. 9976–9992, 2019.  \n[8] H. Chen, C. Wu, B. Du, L. Zhang, and L. Wang, “Change Detection in Multisource VHR Images via Deep Siamese Convolutional Multiple-Layers Recurrent Neural Network,” IEEE    Trans. Geosci. Remote Sens., vol. 58, no. 4, pp. 2848–2864, 2020.  \n[9] C. Wu,  H. 
Chen, B. Du, and L. Zhang, “Unsupervised Change Detection in Multi-temporal VHR Images Based on Deep Kernel PCA Convolutional Mapping Network,” arXiv preprint arXiv:1912.08628, 2019. https:\u002F\u002Farxiv.org\u002Fabs\u002F1912.08628v1.  \n[10] F. Gao, J. Dong, B. Li, and Q. Xu, “Automatic Change Detection in Synthetic Aperture Radar Images Based on PCANet,” IEEE Geosci. Remote Sens. Lett., vol. 13, no. 12, pp. 1792–1796, 2016.  \n[11] T. H. Chan, K. Jia, S. Gao, J. Lu, Z. Zeng, and Y. Ma, “PCANet: A Simple Deep Learning Baseline for Image Classification?,” IEEE Trans. Image Process., vol. 24, no. 12, pp. 5017–5032, 2015.  \n[12] T. Celik, “Unsupervised change detection in satellite images using principal component analysis and K-means clustering,” IEEE Geosci. Remote Sens. Lett., vol. 6, no. 4, pp. 772–776, 2009.  \n[13] M. Zhang and W. Shi, “A Feature Difference Convolutional Neural Network-Based Change Detection Method,” IEEE Trans. Geosci. Remote Sens., vol. 58, no. 10, pp. 7232–7246, 2020.  \n[14] H. Chen, C. Wu, B. Du and L. Zhang, \"Deep Siamese Multi-scale Convolutional Network for Change Detection in Multi-temporal VHR Images,\" 2019 10th International Workshop on the Analysis of Multitemporal Remote Sensing Images (MultiTemp), Shanghai, China, 2019, pp. 1-4.  \n[15] H. Chen, C. Wu, B. Du and L. Zhang, \"Change Detection in Multi-temporal VHR Images Based on Deep Siamese Multi-scale Convolutional Neural Network,\" arXiv preprint arXiv:1906.11479, 2020. https:\u002F\u002Farxiv.org\u002Fabs\u002F1906.11479.  \n[16] S. Saha, F. Bovolo, and L. Bruzzone, “Unsupervised deep change vector analysis for multiple-change detection in VHR Images,” IEEE Trans. Geosci. Remote Sens., vol. 57, no. 6, pp. 3677–3693, 2019.  \n[17] L. Ru, B. Du and C. Wu, \"Multi-Temporal Scene Classification and Scene Change Detection with Correlation based Fusion,\" in IEEE Transactions on Image Processing, doi: 10.1109\u002FTIP.2020.3039328.  \n[18] S. Fang, K. Li, J. Shao and Z. 
Li, \"SNUNet-CD: A Densely Connected Siamese Network for Change Detection of VHR Images,\" in IEEE Geoscience and Remote Sensing Letters, doi: 10.1109\u002FLGRS.2021.3056416.  \n## 问答\n**如有任何问题，请 [联系我们。](mailto:Qschrx@gmail.com)**","# ChangeDetectionRepository 快速上手指南\n\n本仓库提供了遥感图像变化检测的传统方法及深度学习方法的 Python 实现（如 SFA, MAD, SiamCRNN 等），并包含部分多时相数据集。适用于变化检测研究及遥感图像解译。\n\n## 1. 环境准备\n\n本项目依赖多种科学计算与深度学习库，具体取决于您使用的算法模块。建议配置如下环境：\n\n- **操作系统**: Linux \u002F macOS \u002F Windows\n- **Python**: 3.x 版本\n- **核心依赖**:\n  - `numpy`, `scipy`, `scikit-learn` (用于传统方法如 SFA, MAD, PCA-Kmeans)\n  - `tensorflow` 或 `pytorch` (用于深度学习方法如 DSFA, SiamCRNN, SNUNet 等)\n  - `caffe` (部分原始实现可能需要，如 FDCNN, DCVA)\n\n> **提示**: 若存在 `requirements.txt` 文件，请优先使用该文件安装依赖。\n\n## 2. 安装步骤\n\n通过 Git 克隆本仓库到本地目录：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FChangeDetectionRepository.git\ncd ChangeDetectionRepository\n```\n\n若网络访问受限，可考虑配置国内 Git 镜像或使用代理加速下载。\n\n## 3. 基本使用\n\n### 查看数据集\n仓库根目录下包含 `Dataset` 文件夹，存放了部分多时相数据集：\n```bash\nls Dataset\n```\n\n### 运行算法\n根据您的需求选择相应的算法模块。例如，对于传统方法中的 SFA 或 MAD，直接查找对应的 Python 脚本执行。\n\n**示例流程（以通用 Python 脚本为例）：**\n```bash\npython your_script_name.py --input_path .\u002Fpath\u002Fto\u002Fdata\n```\n\n### 参考外部实现\n部分深度学习方法（如 DSFA, SiamCRNN, KPCA-MNet 等）在本文档中提供了链接指向其独立的官方实现仓库，建议结合以下链接获取完整代码：\n- DSFA: `https:\u002F\u002Fgithub.com\u002Frulixiang\u002FDSFANet`\n- SiamCRNN: `https:\u002F\u002Fgithub.com\u002FI-Hope-Peace\u002FSiamCRNN`\n- SNUNet: `https:\u002F\u002Fgithub.com\u002Flikyoo\u002FSiam-NestedUNet`\n\n更多详细信息请参考各章节下的参考文献及说明。","某遥感科研团队受项目委托，需在一周内完成对两期高分卫星影像的城市扩张变化检测任务。\n\n### 没有 ChangeDetectionRepository 时\n- 传统算法如 SFA、MAD 的源码分散在不同论文附件或旧网站中，查找困难且版本陈旧。\n- 深度学习模型如 SiamCRNN 缺乏现成框架，从零搭建双塔网络结构耗时耗力且易出错。\n- 多时相数据集需要自行收集清洗，数据格式不统一导致预处理工作繁重且容易引入误差。\n- 不同方法的代码风格差异大，难以在同一环境下进行公平的性能对比测试和参数调优。\n\n### 使用 ChangeDetectionRepository 后\n- 直接获取 SFA、MAD 等经典算法的 Python 实现，无需重复造轮子，代码可直接运行。\n- 内置 DSFA、SiamCRNN 等深度网络代码，省去了复杂的模型构建与调试过程，降低技术门槛。\n- 
仓库附带部分多时相数据集，无需额外寻找资源即可开始实验验证，加速数据准备阶段。\n- 统一代码规范支持快速切换不同算法，显著缩短了从理论到结果的验证周期，便于横向对比。\n\n它让研究人员能专注于算法效果分析而非底层代码复现，极大提升了遥感变化检测的研究效率。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FChenHongruixuan_ChangeDetectionRepository_692d9265.png","ChenHongruixuan","Sapere Aude","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FChenHongruixuan_7c736af5.jpg","Merely this and nothing more","The University of Tokyo \u002F RIKEN AIP",null,"Qschrx@gmail.com","https:\u002F\u002Fscholar.google.com\u002Fcitations?user=XOk4Cf0AAAAJ&hl=zh-CN&oi=ao","https:\u002F\u002Fgithub.com\u002FChenHongruixuan",[85],{"name":86,"color":87,"percentage":88},"Python","#3572A5",100,526,108,"2026-04-05T00:53:15","MIT","未说明",{"notes":95,"python":93,"dependencies":96},"仓库提供多种变化检测方法的 Python 实现，涵盖传统算法与深度学习模型。涉及 TensorFlow、PyTorch、Caffe 及 MATLAB 等框架，部分代码需参考外部链接。包含部分多时相数据集。",[93],[14,13],[99,100,101,102,103,104],"remote-sensing","change-detection","image-processing","multi-temporal","deep-learning","python","2026-03-27T02:49:30.150509","2026-04-06T05:16:49.146979",[108,113,118,123,128],{"id":109,"question_zh":110,"answer_zh":111,"source_url":112},3336,"MAD 模块中找不到 `util` 文件或 `show_variates` 函数怎么办？","代码已更新，请在 MAD 文件夹下查找测试示例。`show_variates` 函数用于显示 MAD 结果的各个波段，如果找不到 `util` 模块，可以直接注释掉该行代码。由于仓库是临时搭建的，可能存在一些代码错误，我们会尽快修正。","https:\u002F\u002Fgithub.com\u002FChenHongruixuan\u002FChangeDetectionRepository\u002Fissues\u002F1",{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},3337,"哪里可以找到 KPCA-MNet 的代码？","KPCA-MNet 的论文目前仍在审稿中。待论文被正式接受后，我们才会在此仓库分享相关代码。","https:\u002F\u002Fgithub.com\u002FChenHongruixuan\u002FChangeDetectionRepository\u002Fissues\u002F2",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},3338,"MAD 算法中关于特征向量的计算公式是否有误？","用于 MAD 的数据应当是中心化的，因此代码中使用的是 `center_X` 和 `center_Y` 而不是原始图像数据。根据 CCA 公式，`eigenvector_Y` 不需要除以特征值，它是通过与 `eigenvector_X` 的关系求解的。具体可参考 Nilsen 的原始 MATLAB 
实现。","https:\u002F\u002Fgithub.com\u002FChenHongruixuan\u002FChangeDetectionRepository\u002Fissues\u002F3",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},3339,"SFA 方法得到的结果与仓库提供的结果不一致怎么办？","如果结果可视化效果好且准确率合理，建议尝试调整超参数以优化结果一致性。","https:\u002F\u002Fgithub.com\u002FChenHongruixuan\u002FChangeDetectionRepository\u002Fissues\u002F5",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},3340,"数据集中的 `2000TM` 文件是什么含义，如何使用？","`2000TM` 是变化前的图像文件。在使用时，需要借助 GDAL 库来读取该图像文件。","https:\u002F\u002Fgithub.com\u002FChenHongruixuan\u002FChangeDetectionRepository\u002Fissues\u002F6",[]]