[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MaurizioFD--RecSys2019_DeepLearning_Evaluation":3,"tool-MaurizioFD--RecSys2019_DeepLearning_Evaluation":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 
图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 
将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":80,"owner_email":81,"owner_twitter":82,"owner_website":83,"owner_url":84,"languages":85,"stars":117,"forks":118,"last_commit_at":119,"license":120,"difficulty_score":121,"env_os":122,"env_gpu":123,"env_ram":123,"env_deps":124,"category_tags":136,"github_topics":137,"view_count":23,"oss_zip_url":158,"oss_zip_packed_at":158,"status":16,"created_at":159,"updated_at":160,"faqs":161,"releases":199},2037,"MaurizioFD\u002FRecSys2019_DeepLearning_Evaluation","RecSys2019_DeepLearning_Evaluation","This is the repository of our article published in RecSys 2019 \"Are We Really Making Much Progress? 
A Worrying Analysis of Recent Neural Recommendation Approaches\" and of several follow-up studies.","RecSys2019_DeepLearning_Evaluation 是一个专注于推荐系统领域深度学习模型评估与复现的开源项目。它系统性地重新实现了近年来主流的神经推荐算法，并通过统一的实验框架进行公平对比，揭示了部分研究中存在评估不严谨、结果难以复现的问题。该项目的核心贡献在于揭示了推荐系统研究中普遍存在的“伪进步”现象——许多新模型在相同条件下并未显著优于传统方法，促使学界重新审视评估标准。适合推荐系统领域的研究人员、算法工程师和对模型复现有需求的开发者使用，尤其适合希望基于可靠基线开展新研究的群体。项目提供了完整可运行的代码、超参数配置和详细实验结果，涵盖多篇顶会论文（如RecSys、CIKM、TOIS），并已修复了HR\u002FNDCG等关键指标的实现误差，确保评估结果的准确性。项目团队持续维护并欢迎合作，是推动推荐系统研究走向更透明、可复现方向的重要资源。","# DeepLearning RS Evaluation\n\nThis repository was developed by \u003Ca href=\"https:\u002F\u002Fmauriziofd.github.io\u002F\" target=\"_blank\">Maurizio Ferrari Dacrema\u003C\u002Fa>, postdoctoral researcher at Politecnico di Milano. See our [website](http:\u002F\u002Frecsys.deib.polimi.it\u002F) for more information on our research group. This reporsitory contains the source code of the following articles:\n* **\"Are We Really Making Much Progress? A Worrying Analysis of Recent Neural Recommendation Approaches\"**, **RecSys 2019** [BibTex](https:\u002F\u002Fdblp.uni-trier.de\u002Frec\u002Fbibtex\u002Fconf\u002Frecsys\u002FDacremaCJ19).\nFull text available on [ACM DL (open)](https:\u002F\u002Fdl.acm.org\u002Fauthorize?N684126), [ResearchGate](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F334506947_Are_We_Really_Making_Much_Progress_A_Worrying_Analysis_of_Recent_Neural_Recommendation_Approaches) or [ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.06902), source code of our experiments and full results are available [here](https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation).\nThe [slides](Slides\u002FRecSys2019_DeepLearning_Evaluation_Slides.pdf) and [poster](Slides\u002FRecSys2019_DeepLearning_Evaluation_Poster.pdf) are also available.\n* **\"A Troubling Analysis of Reproducibility and Progress in Recommender Systems Research\"**. 
**ACM TOIS 2021** The paper is available on [ACM DL (open)](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3434185?cid=99659309976), [ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07698).\n* **\"Critically Examining the Claimed Value of Convolutions over User-Item Embedding Maps for Recommender Systems\"**, **CIKM 2020** [ACM DL (open)](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3340531.3411901?cid=99659309976), [ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11893), [BibTex](https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fcikm\u002FDacremaPCJ20.html?view=bibtex); see the related documentation [HERE](#Ablation-experiment-for-CNN-algorithms-on-embeddings).\n* **\"Methodological Issues in Recommender Systems Research (Extended Abstract)\"**, **IJCAI 2020** [BibTex](https:\u002F\u002Fdblp.uni-trier.de\u002Frec\u002Fbibtex\u002Fconf\u002Fijcai\u002FDacremaCJ20), [PDF](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F650).\n\nA small example of how to use the baseline models is in _run_example_usage.py_.\n\nWe are still actively pursuing this research direction in evaluation and reproducibility, and we are open to collaboration with other researchers. Follow our project on [ResearchGate](https:\u002F\u002Fwww.researchgate.net\u002Fproject\u002FRecommender-systems-reproducibility-and-evaluation)!\n\nPlease cite our articles if you use this repository or our implementations of the baseline algorithms; remember also to cite the original authors if you use our porting of the DL algorithms. The BibTex code is linked above, next to each article.\n\n\n**Update 06\u002F11\u002F2021:**\nA [recent paper](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3460231.3478848) observed some issues in the description and implementation of HR and NDCG.\nThe issue is present under random holdout, hence it only affects very few NDCG results. 
The implementation has been updated to fix the issue; the additional material contains the updated metric description and the results for the few affected cases.\nThe analysis and conclusions of the study are not affected.\n\n\n## Full results and hyperparameters\nThe full results and corresponding hyperparameters for all DL algorithms are accessible [HERE](DL_Evaluation_TOIS_Additional_material.pdf).\nFor information on the requirements and how to install this repository, see the following [Installation](#Installation) section.\n\n\n## Code organization\nThis repository is organized in several subfolders.\n\n#### Deep Learning Algorithms\nThe Deep Learning algorithms are all contained in the _Conferences_ folder and further divided by the conference they were published in.\nFor each DL algorithm the repository contains two subfolders:\n* A folder named \"_github\" which contains the full original repository, or \"_original\" which contains the source code provided by the authors upon request, with the minor fixes needed for the code to run.\n* A folder named \"_our_interface\" which contains the Python wrappers needed to allow its testing in our framework. The main class for that algorithm has the \"Wrapper\" suffix in its name. 
This folder also contains the functions needed to read and split the data in the appropriate way.\n\nNote that in some cases the original repository also contained the data split used by the original authors; those splits are included as well.\n\n#### Baseline algorithms\nFolders like \"KNN\", \"GraphBased\", \"MatrixFactorization\", \"SLIM_BPR\", \"SLIM_ElasticNet\" and \"EASE_R\" contain all the baseline algorithms we used in our experiments.\nThe complete list is as follows; details on all algorithms and references can be found [HERE](DL_Evaluation_TOIS_Additional_material.pdf):\n* Random: recommends a list of random items,\n* TopPop: recommends the most popular items,\n* UserKNN: User-based collaborative KNN,\n* ItemKNN: Item-based collaborative KNN,\n* UserKNN CBF: User-based content-based KNN,\n* ItemKNN CBF: Item-based content-based KNN,\n* UserKNN CFCBF: User-based hybrid content-based collaborative KNN,\n* ItemKNN CFCBF: Item-based hybrid content-based collaborative KNN,\n* P3alpha: collaborative graph-based algorithm,\n* RP3beta: collaborative graph-based algorithm with reranking,\n* PureSVD: SVD decomposition of the user-item matrix,\n* NMF: Non-negative matrix factorization of the user-item matrix,\n* IALS: Implicit alternating least squares,\n* MatrixFactorization BPR (BPRMF): machine-learning-based matrix factorization optimizing ranking with BPR,\n* MatrixFactorization FunkSVD: machine-learning-based matrix factorization optimizing prediction accuracy with MSE,\n* EASE_R: collaborative shallow autoencoder,\n* SLIM BPR: Item-based machine learning algorithm optimizing ranking with BPR,\n* SLIM ElasticNet: Item-based machine learning algorithm optimizing prediction accuracy with MSE.\n\nThe following similarities are available for all KNN models: cosine, adjusted cosine, Pearson correlation, Dice, Jaccard, asymmetric cosine, Tversky, Euclidean.\n\n\n#### Evaluation\nThe folder _Base.Evaluation_ contains the two evaluator objects (_EvaluatorHoldout_, 
_EvaluatorNegativeSample_) which compute all the metrics we report.\n\n#### Data\nThe data to be used for each experiment is gathered from specific _DataReader_ objects within each DL algorithm's folder. \nThose will load the original data split, if available. If not, they automatically download the dataset and perform the split with the appropriate methodology. If the dataset cannot be downloaded automatically, a console message will display the link at which the dataset can be manually downloaded, along with instructions on where the user should save the compressed file.\nThe data of ConvNCF cannot be automatically handled and should be manually downloaded [HERE](https:\u002F\u002Fpolimi365-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002F10322330_polimi_it\u002FEbAK6lYNM6NMh-b5tDUDC-kBxGLQkLmQYm0dXWuHaykQrQ?e=IOzbTW) and decompressed into the folder \"Conferences\u002FIJCAI\u002FConvNCF_github\u002FData\". \n\nThe folder _Data_manager_ contains a number of _DataReader_ objects, each associated with a specific dataset, which are used to read datasets for which we did not have the original split. \n\nWhenever a new dataset is downloaded and parsed, the preprocessed data is saved in a new folder called _Data_manager_split_datasets_, which contains a subfolder for each dataset. The data split used for the experimental evaluation is saved within the result folder for the relevant algorithm, in a _data_ subfolder. \n\n#### Hyperparameter optimization\nThe folder _ParameterTuning_ contains all the code required to tune the hyperparameters of the baselines. The script _run_parameter_search_ contains the fixed hyperparameter search space used in all our experiments.\nThe object _SearchBayesianSkopt_ performs the hyperparameter optimization for a given recommender instance and hyperparameter space, saving the explored configurations and the corresponding recommendation quality. 
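As noted in the repository FAQ, the evaluators expect the interactions encoded as a scipy sparse matrix of shape |users| x |items|, in any of the CSR, CSC, or COO formats. A minimal sketch of building such a user-item rating matrix (URM) from hypothetical toy interactions; the variable names are illustrative, not the repository API:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Hypothetical implicit-feedback interactions: (user, item) index pairs.
user_ids = np.array([0, 0, 1, 2])
item_ids = np.array([1, 3, 0, 2])
ratings = np.ones(len(user_ids))  # implicit feedback: every interaction is a 1

# URM of shape |n_users| x |n_items|; CSR is convenient for per-user row slicing.
URM_all = coo_matrix((ratings, (user_ids, item_ids)), shape=(3, 4)).tocsr()

print(URM_all.shape)  # (3, 4)
print(URM_all.nnz)    # 4 stored interactions
```

COO is the natural format when assembling from triples; converting to CSR afterwards matches what the FAQ describes (the code auto-detects and converts between formats).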
\n\nIf you want to execute ConvNCF, make sure you extract the data files into the folders Conferences\u002FConvNCF\u002FConvNCF_github\u002FData and ConvolutionRS\u002FConvNCF\u002FConvNCF_github\u002FData.\n\n\n\n## Run the experiments \n\nSee the following [Installation](#Installation) section for information on how to install this repository.\nAfter the installation is complete you can run the experiments.\n\n### Comparison with baseline algorithms\n\nAll experiments related to a DL algorithm reported in our paper can be executed by running the corresponding script, whose name is prefixed with _run__ followed by the conference name and the year of publication.\nThe scripts have the following optional boolean parameters (all default values are False except for the print-results flag):\n* '-b' or '--baseline_tune': Run the baseline hyperparameter search\n* '-a' or '--DL_article_default': Train the deep learning algorithm with the original hyperparameters\n* '-p' or '--print_results': Generate the LaTeX tables for this experiment\n\n\nFor example, if you want to run all the experiments for SpectralCF, you should run this command:\n```console\npython run_RecSys_18_SpectralCF.py -b True -a True -p True\n```\n\n\n\nThe script will:\n* Load and split the data.\n* Run the Bayesian hyperparameter optimization on all baselines, saving the best values found.\n* Run the fit and test of the DL algorithm.\n* Create the LaTeX code of the result tables, as well as plot the data splits, when required. 
\n* The results can be accessed in the _result_experiments_ folder.\n\n\n\n### Ablation experiment for CNN algorithms on embeddings\n\nAll experiments related to a Convolution algorithm reported in our paper can be executed by running the corresponding script, whose name is prefixed with _run__ followed by the conference name and the year of publication.\nFor example, if you want to run the experiments for ConvNCF, you should run this command:\n```console\npython run_IJCAI_18_ConvNCF_CNN_embeddings.py\n```\n\nThe training of baselines is enabled by default; if you want to disable it, use:\n```console\npython run_IJCAI_18_ConvNCF_CNN_embeddings.py --run_baselines False\n```\n\nThe script will:\n* Load and split the data.\n* (If required) Run the Bayesian hyperparameter optimization on all baselines, saving the best values found.\n* Run the pretraining of the embeddings, if needed.\n* Run Study 1, applying 20 permutations of the pretrained embeddings and, for each, evaluating the pretraining model, fitting the Convolution algorithm on the full map, and evaluating it for each component: full map, diagonal (element-wise), off-diagonal (embedding correlations).\n* Run Study 2, reading the previously generated permutations and fitting the model on the specified portion of the interaction map: full map, diagonal (element-wise), off-diagonal (embedding correlations).\n* Create the LaTeX code of the result tables.\n* The results can be accessed in the _result_experiments_ folder.\n\n\n\n\n\n\n\n## Installation\n\nNote that this repository requires Python 3.6.\n\nWe suggest you create an environment for this project using virtualenv (or another tool such as conda).\n\nFirst check out this repository, then enter the repository folder and run these commands to create and activate a new environment:\n\nIf you are using virtualenv:\n```console\nvirtualenv -p python3 DLevaluation\nsource DLevaluation\u002Fbin\u002Factivate\n```\nIf you are using conda:\n```console\nconda create -n DLevaluation 
python=3.6 anaconda\nconda activate DLevaluation\n```\n\nIf you want to run the experiments on CPU, install all the requirements and dependencies with the following command. If you want a GPU installation, install the dependencies as described in the [Installation on GPU](#Installation-on-GPU) subsection.\n```console\npip install -r requirements.txt\n```\n\nAt this point, having installed all dependencies for either CPU or GPU usage, you have to compile all Cython algorithms.\n\nTo compile, you must first have _gcc_ and _python3-dev_ installed. Under Linux they can be installed with the following commands:\n```console\nsudo apt install gcc \nsudo apt-get install python3-dev\n```\nIf you are using Windows as your operating system, the installation procedure is a bit more complex. You may refer to [THIS](https:\u002F\u002Fgithub.com\u002Fcython\u002Fcython\u002Fwiki\u002FInstallingOnWindows) guide.\n\nNow you can compile all Cython algorithms by running the following command. The script will compile within the currently active environment. The code has been developed for Linux and Windows platforms. During the compilation you may see some warnings. \n \n```console\npython run_compile_all_cython.py\n```\n\n\n#### Installation on GPU\nTo run the experiments on GPU, install the requirements and dependencies in the following way. \nThese commands only work if you are using conda to manage your virtual environment. \nIt is possible to install them with other tools or with pip, but that may prove considerably more complex.\n```console\nconda install tensorflow-gpu\nconda install -c anaconda keras-gpu\nconda install -c hcc dm-sonnet-gpu\n\npip install -r requirements_gpu.txt\n```\n\n### Matlab engine\nIn addition to the repository dependencies, KDD CollaborativeDL also requires the Matlab engine, because the algorithm is developed in Matlab. 
\nTo install the engine you can use a script provided directly with your Matlab distribution, as described in the [Matlab Documentation](https:\u002F\u002Fwww.mathworks.com\u002Fhelp\u002Fmatlab\u002Fmatlab_external\u002Finstall-the-matlab-engine-for-python.html).\nThe algorithm requires also a GSL distribution, whose installation folder can be provided as a parameter in the fit function of our Python wrapper. Please refer to the original [CollaborativeDL README](Conferences\u002FKDD\u002FCollaborativeDL_github_matlab\u002FREADME.md) for all installation details.\n\n \n","# 深度学习推荐系统评估\n\n本仓库由米兰理工大学博士后研究员\u003Ca href=\"https:\u002F\u002Fmauriziofd.github.io\u002F\" target=\"_blank\">Maurizio Ferrari Dacrema\u003C\u002Fa>开发。欲了解我们研究组的更多信息，请访问我们的[网站](http:\u002F\u002Frecsys.deib.polimi.it\u002F)。本仓库包含以下论文的源代码：\n* **“我们真的取得了很大进展吗？对近期神经网络推荐方法的令人担忧的分析”**, **RecSys 2019** [BibTex](https:\u002F\u002Fdblp.uni-trier.de\u002Frec\u002Fbibtex\u002Fconf\u002Frecsys\u002FDacremaCJ19)。\n全文可在[ACM DL（开放）](https:\u002F\u002Fdl.acm.org\u002Fauthorize?N684126)、[ResearchGate](https:\u002F\u002Fwww.researchgate.net\u002Fpublication\u002F334506947_Are_We_Really_Making_Much_Progress_A_Worrying_Analysis_of_Recent_Neural_Recommendation_Approaches)或[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1907.06902)上获取，我们实验的源代码及完整结果可在此处获得：[https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation]。\n此外，还提供了[幻灯片](Slides\u002FRecSys2019_DeepLearning_Evaluation_Slides.pdf)和[海报](Slides\u002FRecSys2019_DeepLearning_Evaluation_Poster.pdf)。\n* **“推荐系统研究中可重复性和进展的令人不安的分析”**。**ACM TOIS 2021** 论文可在[ACM DL（开放）](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3434185?cid=99659309976)、[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.07698)上获取。\n* **“对推荐系统中卷积与用户-物品嵌入映射价值主张的批判性审视”**, **CIKM 2020** [ACM 
DL（开放）](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3340531.3411901?cid=99659309976)、[ArXiv](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.11893)、[BibTex](https:\u002F\u002Fdblp.org\u002Frec\u002Fconf\u002Fcikm\u002FDacremaPCJ20.html?view=bibtex)，相关文档请参阅[此处](#Ablation-experiment-for-CNN-algorithms-on-embeddings)。\n* **“推荐系统研究中的方法论问题（扩展摘要）”**, **IJCAI 2020** [BibTex](https:\u002F\u002Fdblp.uni-trier.de\u002Frec\u002Fbibtex\u002Fconf\u002Fijcai\u002FDacremaCJ20)、[PDF](https:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F2020\u002F650)。\n\n在_run_example_usage.py_中提供了一个如何使用基线模型的小示例。\n\n我们仍在积极致力于评估与可重复性方面的研究方向，并欢迎与其他研究人员开展合作。请关注我们在[ResearchGate](https:\u002F\u002Fwww.researchgate.net\u002Fproject\u002FRecommender-systems-reproducibility-and-evaluation)上的项目！\n\n如果您使用本仓库或我们实现的基线算法，请务必引用我们的论文；如果使用我们移植的深度学习算法，也请记得引用原作者。上述论文的BibTex代码已附于每篇论文旁边。\n\n\n**更新日期：2021年6月11日：**\n一篇[最新论文](https:\u002F\u002Fdl.acm.org\u002Fdoi\u002F10.1145\u002F3460231.3478848)指出，在HR和NDCG的描述与实现中存在一些问题。\n该问题仅出现在随机留出测试集的情况下，因此只影响极少数NDCG结果。目前我们已更新了实现以修复此问题，附加材料中包含了更新后的指标说明以及受影响的少数案例的结果。\n不过，该研究的分析与结论并未受到影响。\n\n\n## 完整结果与超参数\n所有深度学习算法的完整结果及对应的超参数可在此处获取：[DL_Evaluation_TOIS_Additional_material.pdf]。\n有关本仓库的安装要求及安装方法，请参阅下文的[安装](#安装)部分。\n\n## 代码组织\n本仓库分为多个子文件夹进行组织。\n\n#### 深度学习算法\n深度学习算法均存放于_Conferences_文件夹中，并按其发表的会议进一步细分。\n\n对于每个深度学习算法，仓库包含两个子文件夹：\n* 名为“_github”的文件夹，包含完整的原始仓库；或名为“_original”的文件夹，包含作者应要求提供的源代码，并附有运行代码所需的少量修正。\n* 名为“_our_interface”的文件夹，包含在我们框架中测试该算法所需的Python封装。该算法的主要类名后缀为“Wrapper”。此文件夹还包含用于以适当方式读取和分割数据的函数。\n\n请注意，在某些情况下，原始仓库也包含了原始作者所用的数据分割，这些内容同样被纳入其中。\n\n#### 基线算法\n诸如“KNN”、“GraphBased”、“MatrixFactorization”、“SLIM_BPR”、“SLIM_ElasticNet”和“EASE_R”等文件夹包含了我们在实验中使用的所有基线算法。\n完整列表如下，所有算法的详细信息及参考文献请见[这里](DL_Evaluation_TOIS_Additional_material.pdf)：\n* Random：随机推荐一系列项目；\n* TopPop：推荐最受欢迎的项目；\n* UserKNN：基于用户的协同KNN；\n* ItemKNN：基于项目的协同KNN；\n* UserKNN CBF：基于用户的混合内容型KNN；\n* ItemKNN CBF：基于项目的混合内容型KNN；\n* UserKNN CFCBF：基于用户的混合内容-协同KNN；\n* ItemKNN CFCBF：基于项目的混合内容-协同KNN；\n* P3alpha：基于图的协同算法；\n* 
RP3beta：带重排序的基于图的协同算法；\n* PureSVD：用户-项目矩阵的SVD分解；\n* NMF：用户-项目矩阵的非负矩阵分解；\n* IALS：隐式交替最小二乘法；\n* MatrixFactorization BPR（BPRMF）：基于机器学习的矩阵分解，以BPR优化排序；\n* MatrixFactorization FunkSVD：基于机器学习的矩阵分解，以MSE优化预测精度；\n* EASE_R：协同浅层自编码器；\n* SLIM BPR：基于项目的机器学习算法，以BPR优化排序；\n* SLIM ElasticNet：基于项目的机器学习算法，以MSE优化预测精度。\n\n以下相似性指标适用于所有KNN模型：余弦、调整余弦、皮尔逊相关系数、Dice、Jaccard、非对称余弦、Tversky、欧几里得\n\n\n#### 评估\n_Base.Evaluation_文件夹包含两个评估器对象（_EvaluatorHoldout_、_EvaluatorNegativeSample_），它们计算我们报告的所有指标。\n\n#### 数据\n每个实验所用的数据均由各深度学习算法文件夹内的特定_DataReader_对象收集。\n如果可用，这些对象将加载原始数据分割；若不可用，则自动下载数据集并采用合适的方法进行分割。如果数据集无法自动下载，控制台会显示可手动下载数据集的链接以及用户应将压缩文件保存到何处的说明。\nConvNCF的数据无法自动处理，需手动下载[这里](https:\u002F\u002Fpolimi365-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002F10322330_polimi_it\u002FEbAK6lYNM6NMh-b5tDUDC-kBxGLQkLmQYm0dXWuHaykQrQ?e=IOzbTW)，并解压至“Conferences\u002FIJCAI\u002FConvNCF_github\u002FData”文件夹。\n\n_Data_manager_文件夹包含若干个_DataReader_对象，每个对象对应一个特定数据集，用于读取那些我们没有原始分割的数据集。\n\n每当下载并解析新数据集时，预处理后的数据会被保存在一个名为_Data_manager_split_datasets_的新文件夹中，该文件夹为每个数据集创建一个子文件夹。用于实验评估的数据分割则保存在相应算法的结果文件夹中的_data子文件夹内。\n\n#### 超参数优化\n_ParameterTuning_文件夹包含所有用于调优基线算法超参数的代码。脚本_run_parameter_search_包含了我们所有实验中使用的固定超参数搜索空间。\n_SearchBayesianSkopt_对象针对给定的推荐器实例和超参数空间进行超参数优化，保存探索过的配置及其对应的推荐质量。\n\n如果您想运行ConvNCF，请确保将数据文件解压至Conferences\u002FConvNCF\u002FConvNCF_github\u002FData和ConvolutionRS\u002FConvNCF\u002FConvNCF_github\u002FData文件夹中。\n\n\n\n## 运行实验\n\n有关如何安装本仓库的信息，请参阅下面的[安装](#安装)部分。\n安装完成后，您即可运行实验。\n\n### 与基线算法的比较\n\n我们论文中报道的所有深度学习算法相关的实验，均可通过运行相应的脚本执行，脚本名称前缀为_run__，后接会议名称和出版年份。\n这些脚本具有以下布尔可选参数（除打印结果标志外，所有默认值均为False）：\n* '-b' 或 '--baseline_tune'：运行基线超参数搜索\n* '-a' 或 '--DL_article_default'：以原始超参数训练深度学习算法\n* '-p' 或 '--print_results'：生成本次实验的LaTeX表格\n\n\n例如，如果您想运行SpectralCF的所有实验，应运行以下命令：\n```console\npython run_RecSys_18_SpectralCF.py -b True -a True -p True\n```\n\n\n\n该脚本将：\n* 加载并分割数据；\n* 对所有基线进行贝叶斯超参数优化，保存找到的最佳值；\n* 运行深度学习算法的拟合与测试；\n* 创建结果表格的LaTeX代码，并在需要时绘制数据分割图；\n* 结果可在_result_experiments_文件夹中访问。\n\n### 
基于嵌入的CNN算法消融实验\n\n我们论文中报道的所有与卷积算法相关的实验，均可通过运行相应的脚本完成。这些脚本均以__run__开头，后接会议名称和发表年份。\n\n例如，如果您想运行ConvNCF的实验，应执行以下命令：\n```console\npython run_IJCAI_18_ConvNCF_CNN_embeddings.py\n```\n\n默认情况下会启用基线模型的训练。如果您希望禁用基线训练，请使用以下命令：\n```console\npython run_IJCAI_18_ConvNCF_CNN_embeddings.py --run_baselines False\n```\n\n该脚本将执行以下操作：\n* 加载并分割数据。\n* （如果需要）对所有基线模型进行贝叶斯超参数优化，并保存找到的最佳值。\n* 如果需要，对嵌入进行预训练。\n* 运行研究1，对预训练的嵌入进行20次置换，并针对每次置换评估预训练模型，分别在完整映射、对角线（逐元素）以及非对角线（嵌入相关性）上拟合卷积算法并进行评估。\n* 运行研究2，读取之前生成的置换结果，并在指定的部分交互映射上拟合模型：完整映射、对角线（逐元素）、非对角线（嵌入相关性）。\n* 生成结果表格的LaTeX代码。\n* 结果可从_result_experiments_文件夹中获取。\n\n\n\n\n\n## 安装说明\n\n请注意，此仓库需要Python 3.6。\n\n首先，我们建议您使用virtualenv（或conda等其他工具）为此项目创建一个环境。\n\n首先克隆本仓库，然后进入仓库文件夹，并运行以下命令创建并激活新环境：\n\n如果您使用的是virtualenv：\n```console\nvirtualenv -p python3 DLevaluation\nsource DLevaluation\u002Fbin\u002Factivate\n```\n如果您使用的是conda：\n```console\nconda create -n DLevaluation python=3.6 anaconda\nconda activate DLevaluation\n```\n\n接下来，如果您想在CPU上运行实验，应使用以下命令安装所有必需的依赖项和库。如果您希望使用GPU，请按照【GPU安装】小节中的说明安装依赖项。\n```console\npip install -r requirements.txt\n```\n\n至此，无论您选择在CPU还是GPU上运行，都已安装完所有依赖项，接下来您需要编译所有Cython算法。\n\n要编译Cython算法，您必须先安装_gcc_和_python3 dev_。在Linux下，可通过以下命令安装：\n```console\nsudo apt install gcc \nsudo apt-get install python3-dev\n```\n如果您使用的是Windows操作系统，安装过程会稍微复杂一些。您可以参考[THIS](https:\u002F\u002Fgithub.com\u002Fcython\u002Fcython\u002Fwiki\u002FInstallingOnWindows)指南。\n\n现在，您可以运行以下命令编译所有Cython算法。该脚本将在当前激活的环境中进行编译。代码已针对Linux和Windows平台开发。编译过程中可能会出现一些警告。\n\n```console\npython run_compile_all_cython.py\n```\n\n\n#### GPU安装说明\n如果您想在GPU上运行实验，应按以下方式安装所需的依赖项和库。这些命令仅适用于使用conda管理虚拟环境的情况。您也可以使用其他工具或pip安装，但可能会变得更加复杂。\n\n```console\nconda install tensorflow-gpu\nconda install -c anaconda keras-gpu\nconda install -c hcc dm-sonnet-gpu\n\npip install -r requirements_gpu.txt\n```\n\n### Matlab引擎\n除了仓库的依赖项外，KDD 
CollaborativeDL还需要Matlab引擎，因为该算法是在Matlab中开发的。要安装Matlab引擎，您可以使用Matlab发行版自带的脚本，具体步骤请参阅[Matlab文档](https:\u002F\u002Fwww.mathworks.com\u002Fhelp\u002Fmatlab\u002Fmatlab_external\u002Finstall-the-matlab-engine-for-python.html)。\n\n此外，该算法还需要GSL库，其安装路径可在我们的Python封装器的fit函数中作为参数提供。有关安装详情，请参阅原始[CollaborativeDL README](Conferences\u002FKDD\u002FCollaborativeDL_github_matlab\u002FREADME.md)。","# RecSys2019_DeepLearning_Evaluation 快速上手指南\n\n## 环境准备\n\n- **系统要求**：Python 3.6（必须）\n- **前置依赖**：\n  - Linux：`gcc` 和 `python3-dev`\n    ```bash\n    sudo apt install gcc\n    sudo apt-get install python3-dev\n    ```\n  - Windows：需安装 Visual Studio C++ 构建工具（含 Cython 编译支持）\n\n## 安装步骤\n\n1. 克隆仓库并进入目录：\n   ```bash\n   git clone https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation.git\n   cd RecSys2019_DeepLearning_Evaluation\n   ```\n\n2. 创建并激活虚拟环境（推荐使用 `conda`，国内用户可使用清华源加速）：\n   ```bash\n   conda create -n DLevaluation python=3.6 anaconda -c https:\u002F\u002Fmirrors.tuna.tsinghua.edu.cn\u002Fanaconda\u002Fpkgs\u002Fmain\u002F\n   conda activate DLevaluation\n   ```\n\n3. 安装依赖（使用清华源加速 pip）：\n   ```bash\n   pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -r requirements.txt\n   ```\n\n4. 
编译 Cython 模块：\n   ```bash\n   python setup.py build_ext --inplace\n   ```\n\n> 注意：ConvNCF 数据需手动下载并解压至 `Conferences\u002FIJCAI\u002FConvNCF_github\u002FData`，下载链接：[https:\u002F\u002Fpolimi365-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002F10322330_polimi_it\u002FEbAK6lYNM6NMh-b5tDUDC-kBxGLQkLmQYm0dXWuHaykQrQ?e=IOzbTW](https:\u002F\u002Fpolimi365-my.sharepoint.com\u002F:u:\u002Fg\u002Fpersonal\u002F10322330_polimi_it\u002FEbAK6lYNM6NMh-b5tDUDC-kBxGLQkLmQYm0dXWuHaykQrQ?e=IOzbTW)\n\n## 基本使用\n\n运行 SpectralCF 的完整实验（包含基线调参、原论文参数训练和结果输出）：\n\n```bash\npython run_RecSys_18_SpectralCF.py -b True -a True -p True\n```\n\n执行后：\n- 数据自动下载并分割（如未提供原始划分）\n- 基线模型超参数优化\n- SpectralCF 模型训练与评估\n- 结果保存至 `result_experiments\u002F` 文件夹\n- 自动生成 LaTeX 表格用于论文排版\n\n> 所有实验结果和超参数详见：[DL_Evaluation_TOIS_Additional_material.pdf](DL_Evaluation_TOIS_Additional_material.pdf)","某电商平台的推荐算法团队正在升级其个性化推荐系统，希望引入最新神经网络模型提升点击率，但团队在复现多篇顶会论文中的模型时屡屡失败，评估结果与论文宣称的“显著提升”严重不符。\n\n### 没有 RecSys2019_DeepLearning_Evaluation 时\n- 团队尝试复现五篇RecSys 2018–2020的神经推荐模型，但多数代码无法运行，依赖库版本混乱，甚至缺少关键配置文件。\n- 评估指标（如NDCG、HR）计算方式不一致，部分模型使用了错误的随机划分策略，导致结果虚高，团队误判模型性能。\n- 不同论文的超参数设置混乱，团队不得不手动逐篇对照、猜测合理范围，耗费两周仍无法建立公平对比基准。\n- 无法判断是模型本身无效，还是实现有误，团队内部对“是否值得投入工程化”产生严重分歧。\n- 试图与论文作者联系获取源码，但多数作者未回应，项目陷入停滞。\n\n### 使用 RecSys2019_DeepLearning_Evaluation 后\n- 所有模型（如NCF、LightGCN、Caser等）均提供标准化Python封装接口，一键运行，无需手动修复依赖或调整结构。\n- 所有评估指标已修正历史错误（如NDCG在随机留出法下的偏差），确保结果真实可比，团队信心大幅提升。\n- 提供统一的超参数配置表与完整实验结果，团队直接复用基准，两周内完成7个模型的横向对比。\n- 通过工具内置的消融实验模块，团队发现某“创新CNN模型”实际性能与传统矩阵分解无异，果断放弃无效方向，节省3人月开发成本。\n- 团队将评估流程标准化，作为内部模型准入的强制环节，新模型必须通过该工具验证才能上线。\n\nRecSys2019_DeepLearning_Evaluation 让推荐系统研发从“玄学复现”回归科学实验，真正实现了公平、可复现、高效率的模型选型。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMaurizioFD_RecSys2019_DeepLearning_Evaluation_eb8210e0.png","MaurizioFD","Maurizio Ferrari Dacrema","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FMaurizioFD_30913b4e.jpg","Assistant Professor, recommender systems evaluation and applied quantum machine learning. 
Twitter @Maurizio_fd","Politecnico di Milano","Milano, Italy","maurizio.ferrari@polimi.it","Maurizio_fd","https:\u002F\u002Fmauriziofd.github.io\u002F","https:\u002F\u002Fgithub.com\u002FMaurizioFD",[86,90,94,98,102,106,110,113],{"name":87,"color":88,"percentage":89},"Python","#3572A5",75.2,{"name":91,"color":92,"percentage":93},"Jupyter Notebook","#DA5B0B",18.1,{"name":95,"color":96,"percentage":97},"Cython","#fedf5b",3.4,{"name":99,"color":100,"percentage":101},"MATLAB","#e16737",2.1,{"name":103,"color":104,"percentage":105},"C++","#f34b7d",0.9,{"name":107,"color":108,"percentage":109},"C","#555555",0.1,{"name":111,"color":112,"percentage":109},"Shell","#89e051",{"name":114,"color":115,"percentage":116},"Makefile","#427819",0,987,252,"2026-04-01T14:55:17","AGPL-3.0",4,"Linux, macOS, Windows","未说明",{"notes":125,"python":126,"dependencies":127},"建议使用 virtualenv 或 conda 创建独立环境；需安装 gcc 和 python3-dev 编译 Cython 模块；ConvNCF 数据集需手动下载并解压至指定路径；部分数据集自动下载失败时需手动获取。","3.6",[128,129,130,131,132,133,134,135],"numpy","scipy","scikit-learn","pandas","cython","tqdm","hyperopt","joblib",[13,54],[138,139,140,141,142,143,144,145,146,147,148,149,150,151,152,153,154,155,156,157],"recommender-system","recommendation-system","recommendation-algorithms","deep-learning","evaluation-framework","neural-network","collaborative-filtering","content-based-recommendation","hybrid-recommender-system","reproducibility","reproducible-research","knn","matrix-factorization","matrix-completion","bpr","bprmf","bprslim","funksvd","slimelasticnet","hyperparameters",null,"2026-03-27T02:49:30.150509","2026-04-06T06:45:23.711215",[162,167,172,177,182,187,191,195],{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},9277,"在 Windows 上运行时，Cython 编译失败怎么办？","可以跳过 Cython 编译步骤，系统会自动使用 Python 版本的相似度计算实现，虽然速度稍慢但功能完整。无需安装 gcc 或 python3-dev，直接运行代码即可。如果仍需编译，请参考 Cython 官方文档配置 Windows 
环境，但编译后的文件不可跨平台迁移。","https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation\u002Fissues\u002F6",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},9278,"HitRate 的计算公式是什么？它是否可能大于 1？","HitRate 计算的是每个用户在 Top-N 推荐列表中正确推荐项的平均数量，因此在多个真实正样本存在时，其值可以大于 1。它等价于 precision@K × K，适用于随机留出评估（random holdout），而非仅单个正样本的留一法（leave-one-out）。","https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation\u002Fissues\u002F19",{"id":173,"question_zh":174,"answer_zh":175,"source_url":176},9279,"如何为冷启动物品构建数据集？","可以直接创建新的稀疏矩阵，无需使用 IncrementalSparseMatrix。例如，通过 URM_train.col \u003C int(URM_train.shape[1] * train_percentage) 生成训练掩码，确保使用 COO 格式的一致性。建议随机选择冷启动物品而非按索引顺序，避免时间偏差影响实验结果。","https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation\u002Fissues\u002F11",{"id":178,"question_zh":179,"answer_zh":180,"source_url":181},9280,"MultVAE 中的 update_count 是否应在每个 epoch 重置？","是的，代码中 update_count 在每个 epoch 开始时重置为 0，这是有意设计。虽然原始论文中该变量跨 epoch 累积，但本实现使用早停机制稳定模型性能，且在隐式反馈场景下（所有评分均为 0 或 1），该设计不影响最终效果。","https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation\u002Fissues\u002F2",{"id":183,"question_zh":184,"answer_zh":185,"source_url":186},9281,"数据集的格式要求是什么？","数据需编码为 scipy 稀疏矩阵，形状为 |用户数| × |物品数|，支持 CSR、CSC、COO 任意格式，代码会自动检测和转换。可在 Data_manager 文件夹中查看常见数据集的读取示例，如 MovieLens 或 Amazon 数据集的格式。","https:\u002F\u002Fgithub.com\u002FMaurizioFD\u002FRecSys2019_DeepLearning_Evaluation\u002Fissues\u002F12",{"id":188,"question_zh":189,"answer_zh":190,"source_url":186},9282,"为什么 run_WWW_17_NeuMF.py 在第 6 个 epoch 停滞不前？","这不是硬件问题，而是训练过程本身可能因数据规模或学习率导致收敛缓慢。建议检查是否启用了 GPU（如 GTX 1070），确认 CUDA 和 cuDNN 配置正确，并尝试降低批量大小或学习率以加速训练。若仍缓慢，可尝试使用更小的数据集进行调试。",{"id":192,"question_zh":193,"answer_zh":194,"source_url":181},9283,"P3alpha 中用户画像是否需要提升到 alpha 次幂？","在当前实现中，由于所有数据均为隐式反馈（值为 0 或 1），用户画像提升到 alpha 次幂不会改变数值结果，因此代码未显式实现。若使用显式评分数据，则需对 user_profile_array 应用 power(alpha) 
操作，但对隐式数据无影响。",{"id":196,"question_zh":197,"answer_zh":198,"source_url":166},9284,"能否在 Linux 上编译 Cython 文件后移植到 Windows？","不可以。编译生成的 .pyd 或 .so 文件是非可移植的，依赖于操作系统和 Python 环境。必须在目标系统（Windows）上重新编译，或直接跳过编译使用纯 Python 实现。",[]]