[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-qiskit-community--qiskit-machine-learning":3,"tool-qiskit-community--qiskit-machine-learning":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 
AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":76,"owner_avatar_url":77,"owner_bio":78,"owner_company":79,"owner_location":79,"owner_email":79,"owner_twitter":79,"owner_website":79,"owner_url":80,"languages":81,"stars":94,"forks":95,"last_commit_at":96,"license":97,"difficulty_score":23,"env_os":98,"env_gpu":99,"env_ram":99,"env_deps":100,"category_tags":110,"github_topics":111,"view_count":23,"oss_zip_url":79,"oss_zip_packed_at":79,"status":16,"created_at":114,"updated_at":115,"faqs":116,"releases":145},3013,"qiskit-community\u002Fqiskit-machine-learning","qiskit-machine-learning","An open-source library built on Qiskit for quantum machine learning tasks at scale on quantum hardware and classical simulators","qiskit-machine-learning 是一个基于 Qiskit 构建的开源库，旨在帮助用户在量子硬件和经典模拟器上大规模执行量子机器学习任务。它主要解决了传统机器学习难以处理的复杂数据模式识别问题，通过引入量子计算特有的优势，为分类和回归等应用场景提供新的解决方案。\n\n该工具非常适合量子计算初学者、研究人员以及希望探索前沿算法的开发者使用。即使没有深厚的量子物理背景，用户也能利用其友好的接口快速原型化量子模型；同时，其灵活的架构也满足了专家进行创新研究的需求。\n\nqiskit-machine-learning 的核心亮点在于提供了关键的计算模块，如“量子核（Quantum Kernels）”和“量子神经网络”。特别是其基于保真度（Fidelity）的量子核方法，能够高效计算数据集的核矩阵，并支持与量子支持向量分类器（QSVC）及回归器（QSVR）无缝结合。作为 Qiskit 社区生态的重要组成部分，它不仅易于上手，还具备高度的可扩展性，方便集成最新的量子算法特性，是连接经典机器学习与量子计算潜力的理想桥梁。","# Qiskit Machine 
Learning\n\n[![License](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fqiskit-community\u002Fqiskit-machine-learning.svg?)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0) \u003C!--- long-description-skip-begin -->\n[![Current Release](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fqiskit-community\u002Fqiskit-machine-learning.svg?logo=Qiskit)](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Freleases)\n[![Build Status](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Factions\u002Fworkflows\u002Fmain.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Factions?query=workflow%3A\"Machine%20Learning%20Unit%20Tests\"+branch%3Amain+event%3Apush)\n[![Coverage Status](https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fbadge.svg?branch=main)](https:\u002F\u002Fcoveralls.io\u002Fgithub\u002Fqiskit-community\u002Fqiskit-machine-learning?branch=main)\n![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fqiskit-machine-learning)\n[![Monthly downloads](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fqiskit-machine-learning.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fqiskit-machine-learning\u002F)\n[![Total downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqiskit-community_qiskit-machine-learning_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fqiskit-machine-learning)\n[![Slack 
Organisation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-blueviolet.svg?label=Qiskit%20Slack&logo=slack)](https:\u002F\u002Fslack.qiskit.org)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2505.17756-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756)\n[![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGitHub%20Pages-Documentation-blue.svg)](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002F)\n\n\u003C!--- long-description-skip-end -->\n\n## What is Qiskit Machine Learning?\n\nQiskit Machine Learning introduces fundamental computational building blocks, such as Quantum \nKernels and Quantum Neural Networks, used in various applications including classification \nand regression.\n\nThis library is part of the Qiskit Community ecosystem, a collection of high-level libraries that are based\non the Qiskit software development kit. As of version `0.7`, Qiskit Machine Learning is co-maintained\nby IBM and the [Hartree Centre](https:\u002F\u002Fwww.hartree.stfc.ac.uk\u002F), part of the UK Science and \nTechnology Facilities Council (STFC).\n\n> [!NOTE]\n> A description of the library structure, features, and domain-specific applications can be found \n> in a dedicated [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2505.17756-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756)\n> paper.
For more details on usage and the API, refer to the [![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-blue.svg)](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002F).\n\nThe Qiskit Machine Learning framework aims to be:\n\n* **User-friendly**, allowing users to quickly and easily prototype quantum machine learning models without \n    the need for extensive quantum computing knowledge.\n* **Flexible**, providing tools and functionalities to conduct proofs-of-concept and innovative research \n    in quantum machine learning for both beginners and experts.\n* **Extensible**, facilitating the integration of new cutting-edge features leveraging Qiskit's \n    architectures, patterns and related services.\n\n\n## What are the main features of Qiskit Machine Learning?\n\n### Kernel-based methods\n\nThe [`FidelityQuantumKernel`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.FidelityQuantumKernel) \nclass uses the [`Fidelity`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.state_fidelities.BaseStateFidelity.html) \nalgorithm. It computes kernel matrices for datasets and can be combined with a Quantum Support Vector Classifier ([`QSVC`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVC.html#qiskit_machine_learning.algorithms.QSVC)) \nor a Quantum Support Vector Regressor ([`QSVR`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVR.html#qiskit_machine_learning.algorithms.QSVR)) \nto solve classification or regression problems, respectively.
It is also compatible with classical kernel-based machine learning algorithms.\n\n\n### Quantum Neural Networks (QNNs)\n\nQiskit Machine Learning defines a generic interface for neural networks, implemented by two core (derived) primitives:\n\n- **[`EstimatorQNN`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html):** Leverages the [`Estimator`](https:\u002F\u002Fquantum.cloud.ibm.com\u002Fdocs\u002Fapi\u002Fqiskit\u002F1.4\u002Fqiskit.primitives.BaseEstimator) primitive, combining parametrized quantum circuits with quantum mechanical observables. The output is the expected value of the observable.\n  \n- **[`SamplerQNN`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.SamplerQNN.html):** Leverages the [`Sampler`](https:\u002F\u002Fquantum.cloud.ibm.com\u002Fdocs\u002Fapi\u002Fqiskit\u002F1.4\u002Fqiskit.primitives.BaseSampler) primitive, translating bit-string counts into the desired outputs.\n\nTo train and use neural networks, Qiskit Machine Learning provides learning algorithms such as the [`NeuralNetworkClassifier`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkClassifier.html#qiskit_machine_learning.algorithms.NeuralNetworkClassifier) \nand [`NeuralNetworkRegressor`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkRegressor.html#qiskit_machine_learning.algorithms.NeuralNetworkRegressor). 
\nFinally, built on these, the Variational Quantum Classifier ([`VQC`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQC.html#qiskit_machine_learning.algorithms.VQC))\nand the Variational Quantum Regressor ([`VQR`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQR.html#qiskit_machine_learning.algorithms.VQR))\ntake a _feature map_ and an _ansatz_ to construct the underlying QNN automatically using high-level syntax.\n\n### Integration with PyTorch\n\nThe [`TorchConnector`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.connectors.TorchConnector.html#qiskit_machine_learning.connectors.TorchConnector) \nintegrates QNNs with [PyTorch](https:\u002F\u002Fpytorch.org). \nThanks to the gradient algorithms in Qiskit Machine Learning, this includes automatic differentiation. \nThe overall gradients computed by PyTorch during the backpropagation take into account quantum neural \nnetworks, too. 
The flexible design also allows the building of connectors to other packages or accelerated\nlibraries.\n\n## Installation and documentation\n\nWe encourage installing Qiskit Machine Learning via the `pip` tool, a `Python` package manager.\n\n```bash\npip install qiskit-machine-learning\n```\n\n`pip` will install all dependencies automatically, so that you will always have the most recent\nstable version.\n\nIf you instead want to work on the very latest _work-in-progress_ versions of Qiskit Machine Learning, \neither to try features ahead of\ntheir official release or to contribute to the library, you can install from source.\nFor more details on how to do so and much more, follow the instructions in the\n[documentation](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fgetting_started.html#installation).\n\n### Optional Installs\n\n* **PyTorch** may be installed either with `pip install 'qiskit-machine-learning[torch]'` or by following the\n  PyTorch [getting started](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F) guide. When PyTorch\n  is installed, the `TorchConnector` facilitates building hybrid quantum-classical neural networks.\n\n* **Sparse** may be installed with `pip install 'qiskit-machine-learning[sparse]'`.\n  [Sparse](https:\u002F\u002Fsparse.pydata.org\u002Fen\u002Flatest\u002F) is built on top of NumPy and `scipy.sparse`, and enables\n  efficient operations on sparse arrays and tensors. Refer to the Sparse [installation guide](https:\u002F\u002Fsparse.pydata.org\u002Fen\u002Flatest\u002Finstall\u002F)\n  for further details.\n\n* **NLopt** is required for the global optimizers. [`NLopt`](https:\u002F\u002Fnlopt.readthedocs.io\u002Fen\u002Flatest\u002F) \n  can be installed manually with `pip install nlopt` on Windows and Linux platforms, or with `brew \n  install nlopt` on macOS using the Homebrew package manager.
For more information, \n  refer to the [installation guide](https:\u002F\u002Fnlopt.readthedocs.io\u002Fen\u002Flatest\u002FNLopt_Installation\u002F).\n\n----------------------------------------------------------------------------------------------------\n\n### Creating your first Qiskit Machine Learning program\n\nNow that Qiskit Machine Learning is installed, it's time to begin working with the machine \nlearning modules. Let's try an experiment using the VQC (Variational Quantum Classifier) algorithm to\ntrain and test samples from a data set to see how accurately the test set can be classified.\n\n```python\nfrom qiskit.circuit.library import n_local, zz_feature_map\nfrom qiskit_machine_learning.optimizers import COBYLA\nfrom qiskit_machine_learning.utils import algorithm_globals\n\nfrom qiskit_machine_learning.algorithms import VQC\nfrom qiskit_machine_learning.datasets import ad_hoc_data\n\nseed = 1376\nalgorithm_globals.random_seed = seed\n\n# Use ad hoc data set for training and test data\nfeature_dim = 2  # dimension of each data point\ntraining_size = 20\ntest_size = 10\n\n# training features, training labels, test features, test labels as np.ndarray,\n# one hot encoding for labels\ntraining_features, training_labels, test_features, test_labels = ad_hoc_data(\n    training_size=training_size, test_size=test_size, n=feature_dim, gap=0.3\n)\n\nfeature_map = zz_feature_map(feature_dimension=feature_dim, reps=2, entanglement=\"linear\")\nansatz = n_local(feature_map.num_qubits, [\"ry\", \"rz\"], \"cz\", reps=3)\nvqc = VQC(\n    feature_map=feature_map,\n    ansatz=ansatz,\n    optimizer=COBYLA(maxiter=100),\n)\nvqc.fit(training_features, training_labels)\n\nscore = vqc.score(test_features, test_labels)\nprint(f\"Testing accuracy: {score:0.2f}\")\n```\n\n### More examples\n\nLearning materials can be found in the\n[Tutorials](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Ftutorials\u002Findex.html) section\nof the documentation.
These notebooks will walk you step by step through different tasks and are designed to be hackable,\nmaking them a great place to start.\n\nAnother good place to learn the fundamentals of quantum machine learning is the\n[Quantum Machine Learning](https:\u002F\u002Fgithub.com\u002FQiskit\u002Ftextbook\u002Ftree\u002Fmain\u002Fnotebooks\u002Fquantum-machine-learning#) notebooks from the original Qiskit Textbook (now archived). \nThe notebooks are convenient for beginners who are eager to learn \nquantum machine learning from scratch, as well as understand the background and theory behind algorithms in\nQiskit Machine Learning. The notebooks cover a variety of topics to build an understanding of parameterized\ncircuits, data encoding, variational algorithms and more, with the ultimate goal of building and training quantum ML models \nfor supervised and unsupervised learning. \nThe Textbook notebooks are complementary to the tutorials of this library. These tutorials emphasize the algorithms, \nwhile the Textbook notebooks explain in more detail the underlying fundamental quantum information principles\nof quantum machine learning.\n\n----------------------------------------------------------------------------------------------------\n\n## How can I contribute?\n\nIf you'd like to contribute to Qiskit, please take a look at our\n[contribution guidelines](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FCONTRIBUTING.md).\nThis project adheres to the Qiskit [code of conduct](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FCODE_OF_CONDUCT.md).\nBy participating, you are expected to uphold this code.\n\nWe use [GitHub issues](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues) for tracking requests and bugs.
Please\n[join the Qiskit Slack community](https:\u002F\u002Fqisk.it\u002Fjoin-slack)\nand use the [`#qiskit-machine-learning`](https:\u002F\u002Fqiskit.enterprise.slack.com\u002Farchives\u002FC07JE3V55C1) \nchannel for discussions and short questions.\nFor questions that are more suited for a forum, you can use the **Qiskit** tag in [Stack Overflow](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fqiskit).\n\n## How can I cite Qiskit Machine Learning?\n\nIf you use Qiskit Machine Learning in your work, please cite the \"overview\" [arXiv paper](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756) to \nsupport the continued development and visibility of the library. The BibTeX citation entry can be found in the \n[`CITATION.bib`](.\u002FCITATION.bib) file.\n\n## Humans behind Qiskit Machine Learning\n\nQiskit Machine Learning was inspired, authored and brought about by the collective work of a \nteam of researchers and software engineers. This library continues to grow with the help and \nwork of \n[many people](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fgraphs\u002Fcontributors), \nwho contribute to the project at different levels.\n\n## License\n\nThis project uses the [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FLICENSE.txt).\n","# Qiskit 机器学习\n\n[![许可证](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flicense\u002Fqiskit-community\u002Fqiskit-machine-learning.svg?)](https:\u002F\u002Fopensource.org\u002Flicenses\u002FApache-2.0) \u003C!--- long-description-skip-begin 
-->\n[![当前版本](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Frelease\u002Fqiskit-community\u002Fqiskit-machine-learning.svg?logo=Qiskit)](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Freleases)\n[![构建状态](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Factions\u002Fworkflows\u002Fmain.yml\u002Fbadge.svg)](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Factions?query=workflow%3A\"Machine%20Learning%20Unit%20Tests\"+branch%3Amain+event%3Apush)\n[![覆盖率](https:\u002F\u002Fcoveralls.io\u002Frepos\u002Fgithub\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fbadge.svg?branch=main)](https:\u002F\u002Fcoveralls.io\u002Fgithub\u002Fqiskit-community\u002Fqiskit-machine-learning?branch=main)\n![PyPI - Python 版本](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Fqiskit-machine-learning)\n[![月下载量](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fdm\u002Fqiskit-machine-learning.svg)](https:\u002F\u002Fpypi.org\u002Fproject\u002Fqiskit-machine-learning\u002F)\n[![总下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqiskit-community_qiskit-machine-learning_readme_f617c87ed07f.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Fqiskit-machine-learning)\n[![Slack 组织](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fslack-chat-blueviolet.svg?label=Qiskit%20Slack&logo=slack)](https:\u002F\u002Fslack.qiskit.org)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2505.17756-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756)\n[![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FGitHub%20Pages-Documentation-blue.svg)](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002F)\n\n\u003C!--- long-description-skip-end -->\n\n## 什么是 Qiskit 机器学习？\n\nQiskit 机器学习引入了量子核函数和量子神经网络等基础计算模块，这些模块可用于分类、回归等多种应用场景。\n\n该库是 Qiskit 社区生态系统的一部分，该生态系统由基于 Qiskit 软件开发工具包的高级库组成。自 `0.7` 版本起，Qiskit 机器学习由 IBM 
和英国科学技术设施委员会 (STFC) 下属的 [哈特里中心](https:\u002F\u002Fwww.hartree.stfc.ac.uk\u002F) 共同维护。\n\n> [!NOTE]\n> 关于库的结构、功能及领域特定应用的详细描述，可在专门的 [![arXiv](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FarXiv-2505.17756-b31b1b.svg)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756) 论文中找到。更多使用说明和 API 详情，请参阅 [![Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FDocumentation-blue.svg)](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002F)。\n\nQiskit 机器学习框架旨在：\n\n* **用户友好**，使用户无需具备深厚的量子计算知识即可快速轻松地原型化量子机器学习模型。\n* **灵活**，为初学者和专家提供工具和功能，以进行量子机器学习的概念验证和创新研究。\n* **可扩展**，便于集成利用 Qiskit 架构、模式及相关服务的新颖前沿功能。\n\n\n## Qiskit 机器学习的主要特性有哪些？\n\n### 基于核的方法\n\n[`FidelityQuantumKernel`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.FidelityQuantumKernel) 类使用 [`Fidelity`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.state_fidelities.BaseStateFidelity.html) 算法。它可为数据集计算核矩阵，并可与量子支持向量分类器 ([`QSVC`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVC.html#qiskit_machine_learning.algorithms.QSVC)) 或量子支持向量回归器 ([`QSVR`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVR.html#qiskit_machine_learning.algorithms.QSVR)) 结合，分别解决分类或回归问题。它也兼容经典的基于核的机器学习算法。\n\n\n### 量子神经网络 (QNNs)\n\nQiskit 机器学习定义了一个通用的神经网络接口，由两个核心（派生）基元实现：\n\n- **[`EstimatorQNN`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html):** 利用 [`Estimator`](https:\u002F\u002Fquantum.cloud.ibm.com\u002Fdocs\u002Fapi\u002Fqiskit\u002F1.4\u002Fqiskit.primitives.BaseEstimator) 基元，将参数化的量子电路与量子力学可观测量相结合。输出为该可观测量的期望值。\n  \n- 
**[`SamplerQNN`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.SamplerQNN.html):** 利用 [`Sampler`](https:\u002F\u002Fquantum.cloud.ibm.com\u002Fdocs\u002Fapi\u002Fqiskit\u002F1.4\u002Fqiskit.primitives.BaseSampler) 基元，将比特串计数转换为所需的输出。\n\n为了训练和使用神经网络，Qiskit 机器学习提供了诸如 [`NeuralNetworkClassifier`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkClassifier.html#qiskit_machine_learning.algorithms.NeuralNetworkClassifier) 和 [`NeuralNetworkRegressor`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkRegressor.html#qiskit_machine_learning.algorithms.NeuralNetworkRegressor) 等学习算法。最后，在此基础上，变分量子分类器 ([`VQC`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQC.html#qiskit_machine_learning.algorithms.VQC)) 和变分量子回归器 ([`VQR`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQR.html#qiskit_machine_learning.algorithms.VQR)) 可以通过一个 _特征映射_（feature map）和一个 _拟设_（ansatz），使用高层语法自动构建底层的 QNN。\n\n### 与 PyTorch 的集成\n\n[`TorchConnector`](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.connectors.TorchConnector.html#qiskit_machine_learning.connectors.TorchConnector) 将 QNNs 与 [PyTorch](https:\u002F\u002Fpytorch.org) 集成。借助 Qiskit 机器学习提供的梯度算法，该集成支持自动微分；PyTorch 在反向传播过程中计算的总体梯度也会将量子神经网络纳入考量。其灵活的设计还允许构建与其他软件包或加速库的连接器。\n\n## 安装与文档\n\n我们建议通过 `pip` 工具（Python 包管理器）来安装 Qiskit 机器学习库。\n\n```bash\npip install qiskit-machine-learning\n```\n\n`pip` 会自动安装所有依赖项，确保您始终使用最新且稳定的版本。\n\n如果您希望使用 Qiskit 机器学习的最新开发版本，无论是为了提前体验尚未正式发布的功能，还是为了参与库的贡献，您可以从源代码进行安装。有关详细步骤及其他信息，请参阅 
[文档](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fgetting_started.html#installation) 中的说明。\n\n### 可选安装\n\n* **PyTorch** 可以通过命令 `pip install 'qiskit-machine-learning[torch]'` 进行安装，也可以参考 PyTorch 的 [入门指南](https:\u002F\u002Fpytorch.org\u002Fget-started\u002Flocally\u002F)。安装 PyTorch 后，`TorchConnector` 可以方便地构建混合量子-经典神经网络。\n\n* **Sparse** 可以通过命令 `pip install 'qiskit-machine-learning[sparse]'` 进行安装。[Sparse](https:\u002F\u002Fsparse.pydata.org\u002Fen\u002Flatest\u002F) 基于 NumPy 和 `scipy.sparse` 构建，支持高效的稀疏数组和张量操作。更多详情请参阅 Sparse 的 [安装指南](https:\u002F\u002Fsparse.pydata.org\u002Fen\u002Flatest\u002Finstall\u002F)。\n\n* 全局优化器需要使用 **NLopt**。[`NLopt`](https:\u002F\u002Fnlopt.readthedocs.io\u002Fen\u002Flatest\u002F) 可以在 Windows 和 Linux 平台上通过 `pip install nlopt` 手动安装，或在 macOS 上使用 Homebrew 包管理器运行 `brew install nlopt`。更多信息请参阅 [安装指南](https:\u002F\u002Fnlopt.readthedocs.io\u002Fen\u002Flatest\u002FNLopt_Installation\u002F)。\n\n----------------------------------------------------------------------------------------------------\n\n### 创建您的第一个 Qiskit 机器学习程序\n\n现在 Qiskit 机器学习已经安装完毕，是时候开始使用其机器学习模块了。让我们尝试一个实验，使用 VQC（变分量子分类器）算法对数据集中的样本进行训练和测试，以查看测试集的分类准确率。\n\n```python\nfrom qiskit.circuit.library import n_local, zz_feature_map\nfrom qiskit_machine_learning.optimizers import COBYLA\nfrom qiskit_machine_learning.utils import algorithm_globals\n\nfrom qiskit_machine_learning.algorithms import VQC\nfrom qiskit_machine_learning.datasets import ad_hoc_data\n\nseed = 1376\nalgorithm_globals.random_seed = seed\n\n# 使用 ad hoc 数据集作为训练和测试数据\nfeature_dim = 2  # 每个数据点的维度\ntraining_size = 20\ntest_size = 10\n\n# 训练特征、训练标签、测试特征、测试标签为 np.ndarray 格式，\n# 标签采用 one-hot 编码\ntraining_features, training_labels, test_features, test_labels = ad_hoc_data(\n    training_size=training_size, test_size=test_size, n=feature_dim, gap=0.3\n)\n\nfeature_map = zz_feature_map(feature_dimension=feature_dim, reps=2, entanglement=\"linear\")\nansatz = n_local(feature_map.num_qubits, [\"ry\", 
\"rz\"], \"cz\", reps=3)\nvqc = VQC(\n    feature_map=feature_map,\n    ansatz=ansatz,\n    optimizer=COBYLA(maxiter=100),\n)\nvqc.fit(training_features, training_labels)\n\nscore = vqc.score(test_features, test_labels)\nprint(f\"测试准确率: {score:0.2f}\")\n```\n\n### 更多示例\n\n您可以在文档的 [教程](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Ftutorials\u002Findex.html) 部分找到学习材料。这些笔记本将逐步引导您完成不同的任务，并且可以自由修改，是非常好的入门资源。\n\n另一个学习量子机器学习基础知识的好地方是原始 Qiskit 教材（现已归档）中的 [量子机器学习](https:\u002F\u002Fgithub.com\u002FQiskit\u002Ftextbook\u002Ftree\u002Fmain\u002Fnotebooks\u002Fquantum-machine-learning#) 笔记本。这些笔记本非常适合初学者，帮助他们从零开始学习量子机器学习，并理解 Qiskit 机器学习中各种算法背后的背景和理论。这些笔记本涵盖了参数化电路、数据编码、变分算法等多个主题，最终目标是构建和训练用于监督和无监督学习的量子机器学习模型。这些教材笔记本与本库的教程相辅相成：教程侧重于算法本身，而教材笔记本则更深入地解释量子机器学习背后的量子信息学基础原理。\n\n----------------------------------------------------------------------------------------------------\n\n## 如何参与贡献？\n\n如果您希望为 Qiskit 贡献代码，请查看我们的 [贡献指南](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FCONTRIBUTING.md)。本项目遵循 Qiskit 的 [行为准则](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FCODE_OF_CONDUCT.md)。参与时，请您遵守该准则。\n\n我们使用 [GitHub Issues](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues) 来跟踪请求和错误。请加入 [Qiskit Slack 社区](https:\u002F\u002Fqisk.it\u002Fjoin-slack)，并在 [`#qiskit-machine-learning`](https:\u002F\u002Fqiskit.enterprise.slack.com\u002Farchives\u002FC07JE3V55C1) 频道中讨论问题或提出简短疑问。对于更适合论坛的问题，您可以在 [Stack Overflow](https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002Ftagged\u002Fqiskit) 上使用 `Qiskit` 标签提问。\n\n## 如何引用 Qiskit 机器学习？\n\n如果您在工作中使用了 Qiskit 机器学习，请引用“概述”版的 [ArXiv 论文](https:\u002F\u002Farxiv.org\u002Fabs\u002F2505.17756)，以支持该库的持续发展和知名度。BibTeX 引用格式可在 [`CITATION.bib`](.\u002FCITATION.bib) 文件中找到。\n\n## Qiskit 机器学习的背后团队\n\nQiskit 机器学习的灵感来源于研究人员和软件工程师的集体努力，由他们共同设计并实现。该库在众多贡献者的帮助下不断发展壮大，这些贡献者来自不同的领域，共同推动着项目的进步。\n\n## 
许可证\n\n本项目采用 [Apache License 2.0](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fmain\u002FLICENSE.txt) 许可证。","# Qiskit Machine Learning 快速上手指南\n\nQiskit Machine Learning 是 Qiskit 生态系统的一部分，提供了量子核（Quantum Kernels）和量子神经网络（QNNs）等基础构建模块，用于解决分类和回归问题。本指南将帮助你快速搭建环境并运行第一个量子机器学习程序。\n\n## 环境准备\n\n在开始之前，请确保你的开发环境满足以下要求：\n\n*   **操作系统**：Windows、macOS 或 Linux。\n*   **Python 版本**：支持 Python 3.8 至 3.12（建议使用最新稳定版）。\n*   **前置依赖**：\n    *   `pip`：Python 包管理工具。\n    *   （可选）**PyTorch**：若需构建混合量子 - 经典神经网络，建议预先安装 PyTorch。\n    *   （可选）**NLopt**：若需使用全局优化器，需在系统层面安装（Linux\u002FWindows 使用 `pip install nlopt`，macOS 使用 `brew install nlopt`）。\n\n> **国内开发者提示**：为避免网络延迟导致下载失败，建议在安装时使用国内镜像源（如清华大学或阿里云镜像）。\n\n## 安装步骤\n\n### 1. 基础安装\n使用 `pip` 安装最新稳定版本。推荐使用国内镜像加速：\n\n```bash\npip install qiskit-machine-learning -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. 可选组件安装\n如果你需要使用 PyTorch 进行混合模型训练或处理稀疏矩阵，可以安装额外依赖：\n\n*   **集成 PyTorch**：\n    ```bash\n    pip install 'qiskit-machine-learning[torch]' -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n    *(如果上述命令未自动安装 PyTorch，请参考 PyTorch 官网的本地安装指南单独安装)*\n\n*   **支持稀疏数组 (Sparse)**：\n    ```bash\n    pip install 'qiskit-machine-learning[sparse]' -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n    ```\n\n## 基本使用\n\n以下示例演示如何使用 **变分量子分类器 (VQC)** 对数据集进行训练和测试。该示例使用了库内置的 `ad_hoc_data` 数据集。\n\n### 代码示例\n\n```python\nfrom qiskit.circuit.library import n_local, zz_feature_map\nfrom qiskit_machine_learning.optimizers import COBYLA\nfrom qiskit_machine_learning.utils import algorithm_globals\n\nfrom qiskit_machine_learning.algorithms import VQC\nfrom qiskit_machine_learning.datasets import ad_hoc_data\n\n# 设置随机种子以保证结果可复现\nseed = 1376\nalgorithm_globals.random_seed = seed\n\n# 准备数据：使用 ad_hoc 数据集\nfeature_dim = 2  # 每个数据点的维度\ntraining_size = 20\ntest_size = 10\n\n# 获取训练特征、训练标签、测试特征、测试标签 (numpy.ndarray 格式)\n# 标签采用独热编码 (one-hot encoding)\ntraining_features, training_labels, 
test_features, test_labels = ad_hoc_data(\n    training_size=training_size, test_size=test_size, n=feature_dim, gap=0.3\n)\n\n# 定义特征映射 (Feature Map) 和  Ansatz (变分形式)\nfeature_map = zz_feature_map(feature_dimension=feature_dim, reps=2, entanglement=\"linear\")\nansatz = n_local(feature_map.num_qubits, [\"ry\", \"rz\"], \"cz\", reps=3)\n\n# 初始化 VQC 模型\nvqc = VQC(\n    feature_map=feature_map,\n    ansatz=ansatz,\n    optimizer=COBYLA(maxiter=100), # 使用 COBYLA 优化器，最大迭代 100 次\n)\n\n# 训练模型\nvqc.fit(training_features, training_labels)\n\n# 评估模型\nscore = vqc.score(test_features, test_labels)\nprint(f\"Testing accuracy: {score:0.2f}\")\n```\n\n### 下一步\n成功运行上述代码后，你可以访问 [官方文档教程](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Ftutorials\u002Findex.html) 获取更多关于量子核方法、量子神经网络以及与 PyTorch 深度集成的进阶示例。","某金融科技团队正尝试利用量子计算优势，对高维非线性交易数据进行异常检测模型的原型验证。\n\n### 没有 qiskit-machine-learning 时\n- 研究人员需从零手动构建量子特征映射电路，并编写复杂的底层代码来计算核矩阵，开发周期长达数周。\n- 缺乏与经典机器学习框架（如 Scikit-Learn）的标准接口，导致无法直接复用现有的分类器流程，集成难度极大。\n- 在切换本地模拟器与真实量子硬件进行测试时，需要重写大量后端连接逻辑，调试过程繁琐且容易出错。\n- 团队中非量子物理背景的算法工程师难以理解底层量子态演化细节，协作沟通成本高昂。\n\n### 使用 qiskit-machine-learning 后\n- 直接调用 `FidelityQuantumKernel` 类即可自动生成核矩阵，将原本数周的电路构建工作缩短至几小时。\n- 通过内置的 `QSVC`（量子支持向量分类器）无缝对接 Scikit-Learn 接口，像调用普通模型一样完成训练与预测。\n- 凭借统一的抽象层，仅需修改一行配置代码即可在经典模拟器和 IBM 真实量子处理器之间自由切换验证。\n- 高度封装的 API 屏蔽了复杂的量子力学数学推导，让传统数据科学家也能快速上手并专注于业务逻辑创新。\n\nqiskit-machine-learning 通过标准化的量子机器学习组件，极大地降低了从理论算法到实际原型验证的门槛，加速了量子优势在垂直领域的落地探索。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fqiskit-community_qiskit-machine-learning_9d3502af.png","qiskit-community","Qiskit 
Community","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fqiskit-community_0fab97b6.png","",null,"https:\u002F\u002Fgithub.com\u002Fqiskit-community",[82,86,90],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.7,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.2,{"name":91,"color":92,"percentage":93},"Shell","#89e051",0.1,954,421,"2026-04-03T02:57:46","Apache-2.0","Linux, macOS, Windows","未说明",{"notes":101,"python":102,"dependencies":103},"核心库可通过 pip 直接安装。PyTorch 集成（用于混合量子 - 经典神经网络）为可选依赖，需单独安装 'qiskit-machine-learning[torch]'。Sparse 库用于稀疏数组高效运算，为可选依赖。NLopt 用于全局优化器，在 Windows\u002FLinux 上通过 pip 安装，macOS 上建议通过 Homebrew 安装。","3.8+",[104,105,106,107,108,109],"qiskit","scipy","numpy","torch (可选)","sparse (可选)","nlopt (可选)",[13],[104,112,113],"machine-learning","quantum-computing","2026-03-27T02:49:30.150509","2026-04-06T07:05:51.761361",[117,122,127,132,136,141],{"id":118,"question_zh":119,"answer_zh":120,"source_url":121},13887,"如何在 Qiskit Machine Learning 中使用 VQC 进行多分类任务？","当样本量较大或特征较多时，可以尝试以下方法：1. 使用 `warm_start` 参数将数据集分批，依次对每个批次进行训练（第一个批次除外需设置 `warm_start=True`），但需注意该功能近期可能存在问题。2. 在训练前应用 PCA 降维，但这可能会降低模型性能。3. 
尝试使用振幅编码（Amplitude Encoding），例如使用 `RawFeatureVector` 将 N 个经典特征加载到 log_2(N) 个量子比特中，但要注意如果特征数量很大，生成的电路深度会非常深。特征映射的选择至关重要，没有通用的最佳方案。","https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues\u002F251",{"id":123,"question_zh":124,"answer_zh":125,"source_url":126},13888,"在干净的环境中运行测试时，test_qsvr 或 test_change_kernel 失败怎么办？","这通常与 Qiskit 版本安装方式有关。如果您使用的是 Qiskit 1.0 或更高版本，不能简单地通过 `pip install -U qiskit` 升级。您需要创建一个新的干净虚拟环境，并在其中直接使用 `pip install 'qiskit>=1'` 安装 Qiskit 1.0+。对于其他包管理器，请参考 Qiskit 1.0 安装指南。此外，升级到 qiskit-machine-learning 0.7.2 和 qiskit 1.1.0rc1 及以上版本通常可以解决此类测试断言误差问题。","https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues\u002F726",{"id":128,"question_zh":129,"answer_zh":130,"source_url":131},13889,"test_qsvr.py 和 test_fidelity_quantum_kernel_qsvr.py 这两个测试文件为什么几乎一样？","这两个测试文件确实非常相似，主要区别在于断言精度以及在 test_qsvr.py 中显式强制核矩阵为正半定。这种情况可能是由于原始代码结构或 Primitive V2 升级前的遗留问题导致的。目前维护者正在确认这是否为预期行为，如果是冗余测试，未来可能会进行清理或合并。","https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues\u002F434",{"id":133,"question_zh":134,"answer_zh":135,"source_url":131},13890,"QSVR 的单元测试为什么使用分类数据集而不是回归数据集？","这是一个已知的问题。目前的 QSVR 单元测试（TestQSVR）确实基于分类数据集（例如标签为 [0, 0, 1, 1]），这对于测试回归器来说并不理想。社区已提出改进计划，旨在修复此问题并使用更适合回归任务的数据集来实现单元测试。",{"id":137,"question_zh":138,"answer_zh":139,"source_url":140},13891,"是否可以在拟合过程中跳过转译（transpile）步骤以加快模拟速度？","目前 Qiskit Machine Learning 在模拟时默认包含转译步骤。虽然用户提出了添加标志位使转译可选的需求（特别是当没有特定设备目标时），但该功能尚未完全实现或合并。如果遇到性能瓶颈，建议关注相关 Issue 的进展，或者检查是否有针对特定模拟器优化的变通方法。","https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fissues\u002F567",{"id":142,"question_zh":143,"answer_zh":144,"source_url":121},13892,"在使用 VQC 进行多分类时，损失函数值为 NaN 且权重不更新是什么原因？","这通常与数据规模、特征维度或编码方式有关。当类别数增加（如从 2 类变为 4 类）时，如果特征映射或 Ansatz 选择不当，可能导致梯度消失或数值不稳定。建议尝试：1. 减少输入特征数量（如使用 PCA）。2. 更换特征映射（Feature Map）或 Ansatz 结构。3. 检查数据归一化情况。4. 
尝试不同的优化器或调整学习率。使用 `RawFeatureVector` 进行振幅编码也是一种思路，但需注意电路深度问题。",[146,151,156,161,166,171,176,181,186,191,196,201,206,211,216,221,226,231],{"id":147,"version":148,"summary_zh":149,"released_at":150},72847,"0.9.0","# Qiskit 机器学习 0.9.0\n\n此版本主要是兼容性和迁移版本，将 Qiskit 机器学习推进到 Qiskit 2.0 \u002F V2 原语生态系统中，同时提供了 API 增强（尤其是在分类器和优化器方面），收紧了支持的 Python 版本，并减少了可选依赖项的数量。\n### 亮点\n* 迁移到 Qiskit 2.0，并将内部集成更新为较新的原语栈（例如，迁移到 V2 采样器\u002F估计器模式，并统一电路库的使用）。\n\n* 更新了 Python 支持：不再支持 Python 3.9，最低要求为 Python ≥ 3.10（并在该周期内增加了对 Python 3.13 的支持）。\n\n## 变更内容\n* 由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F847 中修复了 `README.md` 中的 StackOverflow 格式化错误。\n* 由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F844 中进行了 0.8 版本发布后的后续工作。\n* 为支持不同的原语而进行的清理和错误修复。（#55）由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F855 中完成。\n* 文档 0p8 清理由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F857 中完成。\n* 由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F861 中移除了 `fastdtw` 作为依赖项。\n* ci(mergify)：将配置升级为当前格式由 @mergify[bot] 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F860 中完成。\n* 【文档】修复目录并更新 QNN 派生原语由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F862 中完成。\n* 临时固定 Qiskit 版本为 `\u003C1.3`由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F865 中完成。\n* 为 adam-amsgrad 优化器添加了回调函数支持。由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F869 中完成。\n* 累积更新以扩展算法的 V2 支持、更新教程，并为 VQC 提供部分多分类支持。由 @OkuyanBoga 在 
https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F870 中完成。\n* 为 PegasosQSVC 和 NeuralNetworkClassifier 添加 predict_proba 支持由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F871 中完成。\n* 恢复最新的 Qiskit 1.3+版本由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F875 中完成。\n* 扩展了对来自不同后端的不同 V2 编译器的支持。由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F879 中完成。\n* 修复 trainable_model 的回调兼容性由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F878 中完成。\n* 更新迁移指南由 @OkuyanBoga 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F884 中完成。\n* 更新 `SamplerQNN` 文档由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F886 中完成。\n* 由 @iyanmv 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-ma 中从 `requirements.txt` 文件中移除 `psutil`。","2025-12-24T09:22:47",{"id":152,"version":153,"summary_zh":154,"released_at":155},72848,"0.8.4","## 变更内容\n* 将 SciPy 固定到 `\u003C1.16` 版本，以保持在支持的 Python 版本之间的兼容性（后向移植 #964），由 @mergify[bot] 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F965 中完成\n* 修改 ValueError 消息中的 `self._num_features`（后向移植 #981），由 @mergify[bot] 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F984 中完成\n* 修正 `EstimatorQNN` 教程 01 中的输出形状（后向移植 #982），由 @mergify[bot] 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F983 中完成\n* 更新 VERSION.txt 文件，由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F990 中完成\n\n\n**完整变更日志**: 
https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fcompare\u002F0.8.3...0.8.4","2025-09-12T15:42:40",{"id":157,"version":158,"summary_zh":159,"released_at":160},72849,"0.8.3","## 变更内容\n* 在下一次发布中将 Qiskit 锁定到 `\u003C2` 版本（后向移植 #904），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F935 中完成\n* 修复 mypy 的 CI，并为 COO 添加显式形状（后向移植 #919），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F936 中完成\n* 从 `requirements.txt` 中移除 `psutil`（后向移植 #894），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F937 中完成\n* 尝试解决 CI 的随机失败问题（后向移植 #925），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F938 中完成\n* 修复 issue #911（后向移植 #926），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F939 中完成\n* 更新 intersphinx 映射及其他指向 IQP Classic 的 URL（后向移植 #933），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F940 中完成\n* 更新 tox.ini 中的 Qiskit 版本，由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F941 中完成\n* 修复与最新 mypy 1.16.0 兼容时的 mypy 失败问题（后向移植 #944），由 @mergify 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F955 中完成\n* [稳定版] 准备小版本发布 - 更新 VERSION.txt，由 @edoaltamura 在 https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fpull\u002F956 中完成\n\n\n**完整变更日志**：https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fcompare\u002F0.8.2...0.8.3","2025-06-16T17:28:43",{"id":162,"version":163,"summary_zh":164,"released_at":165},72850,"0.8.2","## 变更内容\n* 修复了在 `qiskit_machine_learning.algorithms.trainable_model` 基础算法中使用不同 `callback` 函数时的兼容性问题。\n* 更新了 `SamplerQNN` 
的文档，改进了关于解释函数使用方法和输出形状的说明。\n* 扩展了对来自不同后端的 V2 量子电路编译器的支持。\n* 扩展了对 Qiskit 1.3.x 版本（发布时的最新版本）的支持。\n\n> [!NOTE]\n> 我们将继续支持派生电路类的 `BlueprintCircuit` 实现，直至 Qiskit Machine Learning 的下一个主要版本发布。","2024-12-20T15:50:21",{"id":167,"version":168,"summary_zh":169,"released_at":170},72851,"0.8.1","# 新特性\n\n- 针对 V2 原语的教程和文档得到了增强，包括一份 [V2 原语迁移指南](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fmigration\u002F02_migration_guide_0.8.html)。\n\n- 对各类量子机器学习算法中 V2 原语的支持范围进一步扩展，涵盖 [VQC](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQC.html)、[VQR](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQR.html)、[QSVC](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVC.html)、[QSVR](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QSVR.html) 以及 [QBayesian](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.QBayesian.html) 等。若未提供原语，这些算法将默认回退至使用 V1 原语；同时新增警告提示用户此默认行为。\n\n- 为 [VQC](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQC.html) 增加了部分多分类支持。当 output_shape 参数设置为 num_classes 并定义了解释函数时，该功能即被启用，从而支持多标签分类任务。\n\n- [PegasosQSVC](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.PegasosQSVC.html) 以及基于 [NeuralNetworkClassifier](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkClassifier.html) 模块派生的算法现支持 predict_proba 函数。该方法的使用方式与其他基于 scikit-learn 的算法类似。\n\n\n- 
[ADAM](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.optimizers.ADAM.html#qiskit_machine_learning.optimizers.ADAM) 类现支持回调函数。此功能允许用户传入自定义回调函数，在优化过程的每一步迭代中调用，并接收相关信息。传递给回调函数的信息包括当前迭代步数、参数值及目标函数值。回调函数的类型应为 `Callable[[int, Union[float, np.ndarray], float], None]`。回调函数示例如下：\n\n```python\ndef callback(iteration:int, weights:np.ndarray, loss:float):\n  ...\n  acc = calculate_accuracy(weights)\n  print(acc)\n  print(loss)\n  ...\n```","2024-12-09T11:51:05",{"id":172,"version":173,"summary_zh":174,"released_at":175},72852,"0.8.0","### 更改日志前言\n\n自本版本起，Qiskit Machine Learning 需要 Qiskit `1.0` 或更高版本。此次更新包含多项重要变更和升级，例如引入量子贝叶斯推理，并将 Qiskit Algorithms 中的部分功能迁移到 Qiskit Machine Learning 中。这些变化旨在实现与从 Qiskit Machine Learning `0.8` 版本开始提供的版本 2（V2）原语的完全兼容性。V1 原语已被弃用，并将于 `0.9` 版本中移除（更多信息请见下文）。\n\n\n# 新特性\n\n### 1. 量子贝叶斯推理\n我们引入了一个新的类 `qiskit_machine_learning.algorithms.QBayesian`，该类在表示具有二值随机变量的贝叶斯网络的量子电路上实现了量子贝叶斯推理。\n\n计算复杂度由原来的 $\\mathcal{O}(nmP(e)^{-1})$ 降低至每样本 $\\mathcal{O}(n\\ 2^{m}P(e)^{-\\frac{1}{2}})$，其中 $n$ 是贝叶斯网络中的节点数，每个节点最多有 $m$ 个父节点，$e$ 表示证据。用户至少需要提供一个能够表示该贝叶斯网络的量子电路。只要该电路能够表示贝叶斯网络的联合概率分布，就可以以多种形式传入。需要注意的是，`QBayesian` 对电路中的量子比特顺序进行了定义：电路中的最后一个量子比特将对应于联合概率分布中的最高有效位。例如，如果随机变量 A、B 和 C 按此顺序输入到电路中，且取值为 ($A=1, B=0$ 和 $C=0$)，则该概率将由量子态 $001$ 的概率幅来表示。\n\n使用该类的示例如下：\n```python\nfrom qiskit import QuantumCircuit\nfrom qiskit_machine_learning.algorithms import QBayesian\n\n# 定义一个量子电路\nqc = QuantumCircuit(...)\n\n# 初始化框架\nqb = QBayesian(qc)\n\n# 执行推理\nresult = qb.inference(query={...}, evidence={...})\n\nprint(\"给定证据下查询的概率：\", result)\n```\n您还可以参考 [QBI 教程](https:\u002F\u002Fgithub.com\u002Fqiskit-community\u002Fqiskit-machine-learning\u002Fblob\u002Fstable\u002F0.8\u002Fdocs\u002Ftutorials\u002F13_quantum_bayesian_inference.ipynb)，其中详细介绍了在贝叶斯网络上进行量子贝叶斯推理的逐步操作方法。\n\n### 2. 支持 Python `3.12`\n\n新增对 Python `3.12` 的支持，用户现在可以使用 Qiskit Machine Learning 与 Python `3.12` 一起工作。\n\n### 3. 
合并 Qiskit Algorithms 功能\n\n将 Qiskit Algorithms 中的核心功能迁移到 Qiskit Machine Learning 中。同时，Qiskit Machine Learning 现在要求 Qiskit 版本不低于 `1.0`。根据您的配置，可能还需要相应升级 Qiskit Aer。由于部分 Qiskit Algorithms 的功能被合并到 Qiskit Machine Learning 中，可能会导致一些破坏性变更。因此，在项目的关键生产阶段升级到 `0.8` 版本时，请务必谨慎。这一变更确保了 Qiskit Machine Learning 能够持续增强和维护其核心功能。","2024-11-11T14:03:22",{"id":177,"version":178,"summary_zh":179,"released_at":180},72853,"0.7.2","# 更改日志\n## 新特性\n- 添加了对使用 Qiskit 机器学习与 Python 3.12 的支持。\n\n## 错误修复\n- 为 [FidelityQuantumKernel](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.FidelityQuantumKernel.html#qiskit_machine_learning.kernels.FidelityQuantumKernel) 添加了 `max_circuits_per_job` 参数。当提交的线路数量超过后端作业限制时，这些线路将被拆分并在不同的作业中运行。\n\n- 移除了 [QuantumKernelTrainer](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer.html#qiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer) 对 `copy.deepcopy` 的依赖，该依赖在真实后端上会引发错误。现在，它会直接在原地修改 [TrainableKernel](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.TrainableKernel.html#qiskit_machine_learning.kernels.TrainableKernel)。如果您希望使用初始核，请调用 [TrainableKernel](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.TrainableKernel.html#qiskit_machine_learning.kernels.TrainableKernel) 的 [assign_training_parameters()](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.TrainableKernel.html#qiskit_machine_learning.kernels.TrainableKernel.assign_training_parameters)，并使用 
[QuantumKernelTrainer](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer.html#qiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer) 的 `initial_point` 属性。\n\n- 修复了量子神经网络中输入和权重绑定顺序可能不正确的问题。尽管已为输入和权重指定了参数，但之前的代码仍按照电路参数给出的顺序来绑定输入和权重。对于 Qiskit 电路库中最常用的特征映射和 ansatz，由于默认参数名称通常符合预期顺序，因此这种做法一般不会出错。然而，对于自定义参数名称等情况，这种顺序并不总是正确的，从而导致意外行为。现在，无论整体电路中参数的排列顺序如何，都将始终按照分别提供的输入和权重参数序列来进行绑定。\n\n- 修复了一个错误：[FidelityStatevectorKernel](https:\u002F\u002Fqiskit-community.github.io\u002Fqiskit-machine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.FidelityStatevectorKernel.html#qiskit_machine_learning.kernels","2024-02-29T16:40:04",{"id":182,"version":183,"summary_zh":184,"released_at":185},72854,"0.7.1","# 更改日志\r\n\r\n此错误修复版本修复了指向 Qiskit Medium 博文的链接，该博文宣布应用模块已迁移至 qiskit-community 组织。","2023-12-01T12:09:29",{"id":187,"version":188,"summary_zh":189,"released_at":190},72855,"0.7.0","# 引言\n\nQiskit 机器学习已迁移到 [qiskit-community GitHub 组织](https:\u002F\u002Fgithub.com\u002Fqiskit-community)，以进一步强调其作为社区驱动项目的定位。为反映这一变化，并且由于我们正在引入更多的代码所有者和维护者，因此在本版本（0.7）中，我们决定移除所有已弃用的代码，无论其被弃用的时间长短。这样做可以确保新加入的开发团队成员无需维护大量遗留代码。对于您这位最终用户而言，这意味着以下两种情况之一：\n\n- 如果您已经迁移了代码，并且不再依赖任何已弃用的功能，则不会有任何影响。\n- 否则，您需要确保您的工作流不依赖于已弃用的类。如果您无法做到这一点，或者希望继续使用已被移除的部分功能，则应将 Qiskit 机器学习的版本固定为 0.6。\n\n有关 Qiskit 机器学习及其他应用项目以及 Qiskit 中算法库相关变更的更多背景信息，请务必阅读这篇 [博客文章](https:\u002F\u002Fibm.biz\u002FBdSyNm)。\n\n# 新特性\n\n- [QNNCircuit](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.circuit.library.QNNCircuit.html#qiskit_machine_learning.circuit.library.QNNCircuit) 类现在可以作为电路传递给 [SamplerQNN](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.SamplerQNN.html#qiskit_machine_learning.neural_networks.SamplerQNN) 和 
[EstimatorQNN](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html#qiskit_machine_learning.neural_networks.EstimatorQNN)。这简化了接口，使得您可以基于特征映射和 ansatz 电路构建基于 [Sampler](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.primitives.Sampler.html#qiskit.primitives.Sampler) 或 [Estimator](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.primitives.Estimator.html#qiskit.primitives.Estimator) 的神经网络实现。使用 [QNNCircuit](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.circuit.library.QNNCircuit.html#qiskit_machine_learning.circuit.library.QNNCircuit) 的优势在于，无需显式地将特征映射和 ansatz 组合在一起。当将 [QNNCircuit](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.circuit.library.QNNCircuit.html#qiskit_machine_learning.circuit.library.QNNCircuit) 传递给 [SamplerQNN](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.SamplerQNN.html#qiskit_machine_learning.neural_networks.SamplerQNN) 或 [EstimatorQNN](https:\u002F\u002Fqiskit.org\u002Fecosystem\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html#qiskit_machine_learning.neural_networks.EstimatorQNN) 时，无需再提供输入参数和权重参数，因为这些属性将直接从 [QNNCirc","2023-11-10T16:24:49",{"id":192,"version":193,"summary_zh":194,"released_at":195},72856,"0.6.1","# 更改日志\n \n## 错误修复\n \n- 兼容性修复，以支持 Python 3.11。\n \n- 修复了 `qiskit_machine_learning.datasets.discretize_and_truncate()` 函数在 NumPy 1.24 版本中的问题。该函数被 QGAN 实现所使用。\n","2023-05-09T07:59:50",{"id":197,"version":198,"summary_zh":199,"released_at":200},72857,"0.6.0","# Changelog\r\n\r\n## New Features\r\n\r\n- Allow callable as an optimizer in NeuralNetworkClassifier, VQC, NeuralNetworkRegressor, VQR, as well as in 
[QuantumKernelTrainer](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer.html#qiskit_machine_learning.kernels.algorithms.QuantumKernelTrainer).\r\n\r\n    Now, the optimizer can either be one of Qiskit’s optimizers, such as [SPSA](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.algorithms.optimizers.SPSA.html#qiskit.algorithms.optimizers.SPSA) or a callable with the following signature:\r\n\r\n```python\r\n      from qiskit.algorithms.optimizers import OptimizerResult\r\n  \r\n      def my_optimizer(fun, x0, jac=None, bounds=None) -> OptimizerResult:\r\n          # Args:\r\n          #     fun (callable): the function to minimize\r\n          #     x0 (np.ndarray): the initial point for the optimization\r\n          #     jac (callable, optional): the gradient of the objective function\r\n          #     bounds (list, optional): a list of tuples specifying the parameter bounds\r\n          result = OptimizerResult()\r\n          result.x = # optimal parameters\r\n          result.fun = # optimal function value\r\n          return result\r\n```\r\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;The above signature also allows to directly pass any SciPy minimizer, for instance as\r\n\r\n```python\r\n      from functools import partial\r\n      from scipy.optimize import minimize\r\n      optimizer = partial(minimize, method=\"L-BFGS-B\")\r\n```\r\n\r\n- Added a new `FidelityStatevectorKernel` class that is optimized to use only statevector-implemented feature maps. Therefore, computational complexity is reduced from $O(N^2)$ to $O(N)$.\r\n\r\n    Computed statevector arrays are also cached to further increase efficiency. This cache is cleared when the `evaluate` method is called, unless `auto_clear_cache` is `False`. 
The cache is unbounded by default, but its size can be set by the user, i.e., limited to the number of samples in the worst case.\r\n\r\n    By default the Terra reference `Statevector` is used, however, the type can be specified via the `statevector_type` argument.\r\n\r\n    Shot noise emulation can also be added. If `shots` is `None`, the exact fidelity is used. Otherwise, the mean is taken of samples drawn from a binomial distribution with probability equal to the exact fidelity.\r\n\r\n    With the addition of shot noise, the kernel matrix may no longer be positive semi-definite (PSD). With `enforce_psd` set to `True` this condition is enforced.\r\n\r\n    An example of using this class is as follows:\r\n\r\n```python\r\n    from sklearn.datasets import make_blobs\r\n    from sklearn.svm import SVC\r\n\r\n    from qiskit.circuit.library import ZZFeatureMap\r\n    from qiskit.quantum_info import Statevector\r\n\r\n    from qiskit_machine_learning.kernels import FidelityStatevectorKernel\r\n\r\n    # generate a simple dataset\r\n    features, labels = make_blobs(\r\n        n_samples=20, centers=2, center_box=(-1, 1), cluster_std=0.1\r\n    )\r\n\r\n    feature_map = ZZFeatureMap(feature_dimension=2, reps=2)\r\n    statevector_type = Statevector\r\n\r\n    kernel = FidelityStatevectorKernel(\r\n        feature_map=feature_map,\r\n        statevector_type=Statevector,\r\n        cache_size=len(labels),\r\n        auto_clear_cache=True,\r\n        shots=1000,\r\n        enforce_psd=True,\r\n    )\r\n    svc = SVC(kernel=kernel.evaluate)\r\n    svc.fit(features, labels)\r\n```\r\n\r\n- The PyTorch connector `TorchConnector` now fully supports sparse output in both forward and backward passes. To enable sparse support, first of all, the underlying quantum neural network must be sparse. In this case, if the sparse property of the connector itself is not set, then the connector inherits sparsity from the networks. 
If the connector is set to be sparse, but the network is not, an exception will be raised. Also you may set the connector to be dense if the network is sparse.\r\n\r\n    This snippet illustrates how to create a sparse instance of the connector.\r\n\r\n```python\r\n    import torch\r\n    from qiskit import QuantumCircuit\r\n    from qiskit.circuit.library import ZFeatureMap, RealAmplitudes\r\n\r\n    from qiskit_machine_learning.connectors import TorchConnector\r\n    from qiskit_machine_learning.neural_networks import SamplerQNN\r\n\r\n    num_qubits = 2\r\n    fmap = ZFeatureMap(num_qubits, reps=1)\r\n    ansatz = RealAmplitudes(num_qubits, reps=1)\r\n    qc = QuantumCircuit(num_qubits)\r\n    qc.compose(fmap, inplace=True)\r\n    qc.compose(ansatz, inplace=True)\r\n\r\n    qnn = SamplerQNN(\r\n        circuit=qc,\r\n        input_params=fmap.parameters,\r\n        weight_params=ansatz.parameters,\r\n        sparse=True,\r\n    )\r\n\r\n    connector = TorchConnector(qnn)\r\n\r\n    output = connector(torch.tensor([[1., 2.]]))\r\n    print(output)\r\n\r\n    loss = torch.sparse.sum(output)\r\n    loss.backward()\r\n\r\n    grad = connector.weight.grad\r\n    print(grad)\r\n```\r\n\r\n&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;In hybrid setup, where a PyTorch-based neural network has classical and quantum l","2023-03-27T20:48:05",{"id":202,"version":203,"summary_zh":204,"released_at":205},72858,"0.5.0","# Changelog\r\n\r\n## New Features\r\n\r\n- Added support for categorical and ordinal labels to [VQC](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQC.html#qiskit_machine_learning.algorithms.VQC). Now labels can be passed in different formats, they can be plain ordinal labels, a one dimensional array that contains integer labels like 0, 1, 2, …, or an array with categorical string labels. One-hot encoded labels are still supported. 
Internally, labels are transformed to one hot encoding and the classifier is always trained on one hot labels.\r\n- Introduced Estimator Quantum Neural Network ([EstimatorQNN](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html#qiskit_machine_learning.neural_networks.EstimatorQNN)) based on (runtime) primitives. This implementation leverages the estimator primitive (see [BaseEstimator](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.primitives.BaseEstimator.html#qiskit.primitives.BaseEstimator)) and the estimator gradients (see [BaseEstimatorGradient](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.algorithms.gradients.BaseEstimatorGradient.html#qiskit.algorithms.gradients.BaseEstimatorGradient)) to enable runtime access and more efficient computation of forward and backward passes.\r\nThe new [EstimatorQNN](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EstimatorQNN.html#qiskit_machine_learning.neural_networks.EstimatorQNN) exposes a similar interface to the Opflow QNN, with a few differences. One is the quantum_instance parameter. This parameter does not have a direct replacement, and instead the estimator parameter must be used. The gradient parameter keeps the same name as in the Opflow QNN implementation, but it no longer accepts Opflow gradient classes as inputs; instead, this parameter expects an (optionally custom) primitive gradient.\r\nThe existing training algorithms such as [VQR](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.VQR.html#qiskit_machine_learning.algorithms.VQR), that were based on the Opflow QNN, are updated to accept both implementations. 
The implementation of [NeuralNetworkRegressor](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.algorithms.NeuralNetworkRegressor.html#qiskit_machine_learning.algorithms.NeuralNetworkRegressor) has not changed.\r\n\r\n- Introduced Quantum Kernels based on (runtime) primitives. This implementation leverages the fidelity primitive (see [BaseStateFidelity](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.algorithms.state_fidelities.BaseStateFidelity.html#qiskit.algorithms.state_fidelities.BaseStateFidelity)) and provides more flexibility to end users. The fidelity primitive calculates state fidelities\u002Foverlaps for pairs of quantum circuits and requires an instance of [Sampler](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fstubs\u002Fqiskit.primitives.Sampler.html#qiskit.primitives.Sampler). Thus, users may plug in their own implementations of fidelity calculations.\r\nThe new kernels expose the same interface and the same parameters except the quantum_instance parameter. This parameter does not have a direct replacement and instead the fidelity parameter must be used.\r\n\r\n    A new hierarchy is introduced:\r\n\r\n            - A base and abstract class [BaseKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.BaseKernel.html#qiskit_machine_learning.kernels.BaseKernel) is introduced. All concrete implementation must inherit this class.\r\n\r\n            - A fidelity based quantum kernel [FidelityQuantumKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.FidelityQuantumKernel.html#qiskit_machine_learning.kernels.FidelityQuantumKernel) is added. 
This is a direct replacement of [QuantumKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.QuantumKernel). The difference is that the new class takes either a sampler or a fidelity instance to estimate overlaps and construct kernel matrix.\r\n\r\n            - A new abstract class [TrainableKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.TrainableKernel.html#qiskit_machine_learning.kernels.TrainableKernel) is introduced to generalize ability to train quantum kernels.\r\n\r\n            - A fidelity-based trainable quantum kernel [TrainableFidelityQuantumKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.TrainableFidelityQuantumKernel.html#qiskit_machine_learning.kernels.TrainableFidelityQuantumKernel) is introduced. This is a replacement of the existing [QuantumKernel](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.kernels.QuantumKernel.html#qiskit_machine_learning.kernels.QuantumKernel) if a trainable kernel is r","2022-11-08T22:41:22",{"id":207,"version":208,"summary_zh":209,"released_at":210},72859,"0.4.0","# Changelog\r\n\r\n## New Features\r\n\r\n- In the previous releases at the backpropagation stage of CircuitQNN and OpflowQNN gradients were computed for each sample in a dataset individually and then the obtained values were aggregated into one output array. Thus, for each sample in a dataset at least one job was submitted. Now, gradients are computed for all samples in a dataset in one go by passing a list of values for a single parameter to CircuitSampler. Therefore, a number of jobs required for such computations is significantly reduced. 
This improvement may speed up training process in the cloud environment, where queue time for submitting a job may be a major contribution in the overall training time.\r\n\r\n- Introduced two new classes, [EffectiveDimension](https:\u002F\u002Fqiskit.org\u002Fdocumentation\u002Fmachine-learning\u002Fstubs\u002Fqiskit_machine_learning.neural_networks.EffectiveDimension.html#qiskit_machine_learning.neural_networks.EffectiveDimension) and LocalEffectiveDimension, for calculating the capacity of quantum neural network models through the computation of the Fisher Information Matrix. The local effective dimension bounds the generalization error of QNNs and only accepts single parameter sets as inputs. The global effective dimension (or just effective dimension) can be used as a measure of the expressibility of the model, and accepts multiple parameter sets.\r\n\r\n- Objective functions constructed by the neural network classifiers and regressors now include an averaging factor that is evaluated as 1 \u002F number_of_samples. Computed averaged objective values are passed to a user specified callback if any. Users may notice a dramatic decrease in the objective values in their callbacks. This is due to this averaging factor.\r\n\r\n- Added support for saving and loading machine learning models. This support is introduced in TrainableModel, so all sub-classes can be saved and loaded. Also, kernel based models can be saved and loaded. A list of models that support saving and loading models:\r\n\r\n        NeuralNetworkClassifier\r\n\r\n        NeuralNetworkRegressor\r\n\r\n        VQC\r\n\r\n        VQR\r\n\r\n        QSVC\r\n\r\n        QSVR\r\n\r\n        PegasosQSVC\r\n\r\n- When model is saved all model parameters are saved to a file, including a quantum instance that is referenced by internal objects. 
That means that if a model is loaded from a file and used, for instance, for inference, the same quantum instance and the corresponding backend will be used, even if a cloud backend was originally used.\r\n\r\n- Added a new feature in CircuitQNN that ensures unbound_pass_manager is called when caching the QNN circuit and that bound_pass_manager is called when QNN parameters are assigned.\r\n\r\n- Added a new feature in QuantumKernel that ensures the bound_pass_manager, when provided via the QuantumInstance, is used when transpiling the kernel circuits.\r\n\r\n## Upgrade Notes\r\n\r\n- Added support for running with Python 3.10. At the time of the release, Torch didn’t have a Python 3.10 version.\r\n\r\n- The previously deprecated BaseBackend class has been removed. It was originally deprecated in the Qiskit Terra 0.18.0 release.\r\n\r\n- Support for running with Python 3.6 has been removed. To run Machine Learning you need a minimum Python version of 3.7.\r\n\r\n## Deprecation Notes\r\n\r\n- The functions breast_cancer, digits, gaussian, iris and wine in the datasets module are deprecated and should not be used.\r\n\r\n- Class CrossEntropySigmoidLoss is deprecated and marked for removal.\r\n\r\n- Removed support of l1 and l2 values as loss function definitions. Please use absolute_error and squared_error respectively.\r\n\r\n## Bug Fixes\r\n\r\n- Fixes in the Ad Hoc dataset. Fixed a ValueError when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.\r\n\r\n- Previously, VQC would throw an error if trained on batches of data where not all of the target labels that can be found in the full dataset were present. This is because VQC interpreted the number of unique targets in the current batch as the number of classes. Currently, VQC is hard-coded to expect one-hot-encoded targets. 
Therefore, VQC will now determine the number of classes from the shape of the target array.\r\n\r\n- Fixes an issue where VQC could not be trained on multiclass datasets; it returned nan values on some iterations. This is fixed in two ways. First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C, where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.\r\n\r\n- Fixes an is","2022-04-29T17:12:45",{"id":212,"version":213,"summary_zh":214,"released_at":215},72860,"0.3.1","# Changelog\r\n\r\n## Upgrade Notes\r\n\r\n- Added support for running with Python 3.10. At the time of the release, Torch didn’t have a Python 3.10 version.\r\n\r\n## Bug Fixes\r\n\r\n- Fixes in the Ad Hoc dataset. Fixed a ValueError when n=3 is passed to ad_hoc_data. When the value of n is not 2 or 3, a ValueError is raised with a message that the only supported values of n are 2 and 3.\r\n\r\n- Previously, VQC would throw an error if trained on batches of data where not all of the target labels that can be found in the full dataset were present. This is because VQC interpreted the number of unique targets in the current batch as the number of classes. Currently, VQC is hard-coded to expect one-hot-encoded targets. Therefore, VQC will now determine the number of classes from the shape of the target array.\r\n\r\n- Fixes an issue where VQC could not be trained on multiclass datasets. It returned nan values on some iterations. This is fixed in two ways. 
First, the default parity function is now guaranteed to be able to assign at least one output bitstring to each class, so long as 2**N >= C, where N is the number of output qubits and C is the number of classes. This guarantees that it is at least possible for every class to be predicted with a non-zero probability. Second, even with this change it is still possible that on a given training instance a class is predicted with 0 probability. Previously this could lead to nan in the CrossEntropyLoss calculation. We now replace 0 probabilities with a small positive value to ensure the loss cannot return nan.\r\n\r\n- Fixes an issue where VQC would fail with warm_start=True. The extraction of the initial_point in TrainableModel from the final point of the minimization had not been updated to reflect the refactor of optimizers in qiskit-terra; the old optimize method, which returned a tuple, was deprecated and a new method, minimize, was created that returns an OptimizerResult object. We now correctly recover the final point of the minimization from previous fits to use for a warm start in subsequent fits.","2022-02-17T23:18:27",{"id":217,"version":218,"summary_zh":219,"released_at":220},72861,"0.3.0","# Changelog\r\n\r\n## New Features\r\n\r\n- Addition of a QuantumKernelTrainer object which may be used by kernel-based machine learning algorithms to perform optimization of some QuantumKernel parameters before training the model. Addition of a new base class, KernelLoss, in the loss_functions package. Addition of a new KernelLoss subclass, SVCLoss.\r\n\r\n- The class TrainableModel, and its sub-classes NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, VQC, have a new optional argument callback. Users can optionally provide a callback function that can access the intermediate training data to track the optimization process; otherwise it defaults to None. The callback function takes in two parameters: the weights for the objective function and the computed objective value. 
For each iteration, the optimizer invokes the callback and passes the current weights and the computed value of the objective function.\r\n\r\n- Classification models (i.e. models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string type data and is automatically encoded using either one-hot or integer encodings. The encoder type is determined by the one_hot argument supplied when instantiating the model.\r\n\r\n- There’s an additional transpilation step introduced in CircuitQNN that is invoked when a quantum instance is set. A circuit passed to CircuitQNN is transpiled and saved for subsequent usages, so every time the circuit is executed it is already transpiled and the overall time of the forward pass is reduced. Due to implementation limitations of RawFeatureVector it can’t be transpiled in advance, so it is transpiled every time it is required to be executed, and only when all parameters are bound. This means that overall performance when RawFeatureVector is used stays the same.\r\n\r\n- Introduced a new classification algorithm, an alternative version of the Quantum Support Vector Classifier (QSVC) that is trained via the Pegasos algorithm from https:\u002F\u002Fhome.ttic.edu\u002F~nati\u002FPublications\u002FPegasosMPB.pdf instead of the dual optimization problem as in sklearn. This algorithm yields a training complexity that is independent of the size of the training set (see the to-be-published Master’s Thesis “Comparing Quantum Neural Networks and Quantum Support Vector Machines” by Arne Thomsen), such that PegasosQSVC is expected to train faster than QSVC for sufficiently large training sets.\r\n\r\n- QuantumKernel transpiles all circuits before execution. However, this information was not being passed on, which caused the transpiler to be called many times during the execution of the QSVC\u002FQSVR algorithm. 
Now, had_transpiled=True is passed correctly and the algorithm runs faster.\r\n\r\n- QuantumKernel now provides an interface for users to specify a new class field, user_parameters. User parameters are an array of Parameter objects corresponding to parameterized quantum gates in the feature map circuit that the user wishes to tune. This is useful in algorithms where feature map parameters must be bound and re-bound many times (i.e. variational algorithms). Users may also use a new function, assign_user_parameters, to assign real values to some or all of the user parameters in the feature map.\r\n\r\n- Introduced the TorchRuntimeClient for training a quantum model or a hybrid quantum-classical model faster using Qiskit Runtime. It can also be used to predict results with the trained model, or to calculate the score of the trained model, faster using Qiskit Runtime.\r\n\r\n## Known Issues\r\n\r\n- If positional arguments are passed into QSVR or QSVC and these classes are printed, an exception is raised.\r\n\r\n## Deprecation Notes\r\n\r\n- Positional arguments in QSVR and QSVC are deprecated.\r\n\r\n## Bug Fixes\r\n\r\n- Fixed a bug in QuantumKernel where, for the statevector simulator, all circuits were constructed and transpiled at once, leading to high memory usage. Now the circuits are batched similarly to how it was previously done for non-statevector simulators (the same flag is used for both now; previously batch_size was silently ignored by the statevector simulator).\r\n\r\n- Fixed a bug where TorchConnector failed on backward pass computation due to empty parameters for inputs or weights. 
Validation was added to qiskit_machine_learning.neural_networks.NeuralNetwork._validate_backward_output().\r\n\r\n- TwoLayerQNN now passes the value of the exp_val parameter in the constructor to the constructor of OpflowQNN, which TwoLayerQNN inherits from.\r\n\r\n- In some configurations, the forward pass of a neural network could return the same value across multiple calls even if different weights were passed. This behavior was confirmed with the AQGD optimizer. It was due to a bug in the implementation of the objective functions: they cache a value obtained at the forward pass to be re-used in the backward pass. Initially, this cache was based on an identifier (a call of the id() function) of the weights array. AQGD re-uses the same array for weig","2021-12-15T16:29:01",{"id":222,"version":223,"summary_zh":224,"released_at":225},72862,"0.2.1","# Changelog\r\n\r\n## Added\r\n\r\n- The class TrainableModel, and its sub-classes NeuralNetworkClassifier, NeuralNetworkRegressor, VQR, VQC, have a new optional argument callback. Users can optionally provide a callback function that can access the intermediate training data to track the optimization process; otherwise it defaults to None. The callback function takes in two parameters: the weights for the objective function and the computed objective value. For each iteration, the optimizer invokes the callback and passes the current weights and the computed value of the objective function.\r\n- Classification models (i.e. models that extend the NeuralNetworkClassifier class, like VQC) can now handle categorical target data in methods like fit() and score(). Categorical data is inferred from the presence of string type data and is automatically encoded using either one-hot or integer encodings. 
The encoder type is determined by the one_hot argument supplied when instantiating the model.\r\n\r\n## Fixed\r\n\r\n- Fixed a bug where qiskit_machine_learning.circuit.library.RawFeatureVector.copy() didn’t copy all internal settings, which could lead to issues with the copied circuit. As a consequence, qiskit_machine_learning.circuit.library.RawFeatureVector.bind_parameters() is also fixed.\r\n\r\n- The QNN weight parameter in TorchConnector is now registered in the torch DAG as weight, instead of _weights. This is consistent with the PyTorch naming convention and the weight property used to get access to the computed weights.\r\n\r\n","2021-08-24T12:33:14",{"id":227,"version":228,"summary_zh":229,"released_at":230},72863,"0.2.0","# Changelog\r\n\r\n## Added\r\n\r\n- A base class TrainableModel is introduced for machine learning models. This class follows Scikit-Learn principles and makes quantum machine learning compatible with classical models. Both NeuralNetworkClassifier and NeuralNetworkRegressor extend this class. A base class ObjectiveFunction is introduced for objective functions optimized by machine learning models. There are three objective functions introduced that are used by ML models: BinaryObjectiveFunction, MultiClassObjectiveFunction, and OneHotObjectiveFunction. These functions are used internally by the models.\r\n- The optimizer argument for the classes NeuralNetworkClassifier and NeuralNetworkRegressor, both of which extend the TrainableModel class, is made optional with the default value being SLSQP(). The same is true for the classes VQC and VQR as they inherit from NeuralNetworkClassifier and NeuralNetworkRegressor respectively.\r\n- The constructor of NeuralNetwork, and all classes that inherit from it, has a new parameter input_gradients which defaults to False. Previously this parameter could only be set using the setter method. Note that TorchConnector previously set input_gradients of the NeuralNetwork it was instantiated with to True. 
This is no longer the case. So if you use TorchConnector and want to compute the gradients w.r.t. the input, make sure you set input_gradients=True on the NeuralNetwork before passing it to TorchConnector.\r\n- Added a parameter initial_point to the neural network classifiers and regressors. This is an array that is passed to an optimizer as an initial point to start from.\r\n- Computation of gradients with respect to input data in the backward method of NeuralNetwork is now optional. By default gradients are not computed. They may be inspected and turned on, if required, by getting or setting the new property input_gradients in the NeuralNetwork class.\r\n- Now NeuralNetworkClassifier extends ClassifierMixin and NeuralNetworkRegressor extends RegressorMixin from Scikit-Learn and they rely on their methods for score calculation. This also adds the ability to pass sample weights as an optional parameter to the score methods.\r\n\r\n## Changed\r\n\r\n- The valid values passed to the loss argument of the TrainableModel constructor were partially deprecated (i.e. loss='l1' is replaced with loss='absolute_error' and loss='l2' is replaced with loss='squared_error'). This affects instantiation of classes like the NeuralNetworkClassifier. This change was made to reduce confusion that stems from using the lowercase ‘l’ character, which can be mistaken for a numeric ‘1’ or capital ‘I’. You should update your model instantiations by replacing ‘l1’ with ‘absolute_error’ and ‘l2’ with ‘squared_error’.\r\n- The weights property in TorchConnector is deprecated in favor of the weight property, which is PyTorch compatible. 
By default, PyTorch layers expose weight properties to get access to the computed weights.\r\n\r\n## Fixed\r\n\r\n- Fixed the exception that occurs when no optimizer argument is passed to NeuralNetworkClassifier and NeuralNetworkRegressor.\r\n- Fixed the computation of gradients in TorchConnector when a batch of input samples is provided.\r\n- TorchConnector now returns the correct input gradient dimensions during the backward pass in hybrid NN training.\r\n- Added dedicated handling of ComposedOp as an operator in OpflowQNN. In this case the output shape is determined from the first operator in the ComposedOp instance.\r\n- Fixed the dimensions of the gradient in the quantum generator for qGAN training.\r\n\r\n","2021-07-12T21:15:58",{"id":232,"version":233,"summary_zh":234,"released_at":235},72864,"0.1.0","# First Release","2021-04-01T21:17:59"]
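The callback signature and the 1 \u002F number_of_samples averaging factor described in the changelogs above can be illustrated with a minimal standalone sketch. This is plain Python with no qiskit-machine-learning dependency; `toy_objective`, `callback`, and `history` are illustrative names, not library API.

```python
# Sketch of the TrainableModel-style callback and the
# 1 / number_of_samples averaging factor from the 0.4.0 notes.
# All names here are illustrative, not qiskit-machine-learning API.

def toy_objective(weights, samples, targets):
    """Squared error of a one-parameter linear model, averaged over
    the dataset (mirroring the averaging factor introduced in 0.4.0)."""
    errors = [(weights[0] * x - y) ** 2 for x, y in zip(samples, targets)]
    return sum(errors) / len(samples)

history = []

def callback(weights, objective_value):
    # Signature as described in the changelog: the current weights and
    # the computed objective value, invoked once per iteration.
    history.append((list(weights), objective_value))

# Simulate two optimizer iterations on a 4-sample dataset.
samples, targets = [1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]
for w in ([1.5], [2.0]):
    callback(w, toy_objective(w, samples, targets))

print(history)
```

Because the objective is divided by the number of samples, callback values here are four times smaller than the summed objective would be, which is the "dramatic decrease" users may notice after upgrading to 0.4.0.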