[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-MatthewJA--Inverse-Reinforcement-Learning":3,"tool-MatthewJA--Inverse-Reinforcement-Learning":65},[4,23,32,40,49,57],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":22},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",85092,2,"2026-04-10T11:13:16",[13,14,15,16,17,18,19,20,21],"图像","数据工具","视频","插件","Agent","其他","语言模型","开发框架","音频","ready",{"id":24,"name":25,"github_repo":26,"description_zh":27,"stars":28,"difficulty_score":29,"last_commit_at":30,"category_tags":31,"status":22},5784,"funNLP","fighting41love\u002FfunNLP","funNLP 是一个专为中文自然语言处理（NLP）打造的超级资源库，被誉为\"NLP 民工的乐园”。它并非单一的软件工具，而是一个汇集了海量开源项目、数据集、预训练模型和实用代码的综合性平台。\n\n面对中文 NLP 领域资源分散、入门门槛高以及特定场景数据匮乏的痛点，funNLP 提供了“一站式”解决方案。这里不仅涵盖了分词、命名实体识别、情感分析、文本摘要等基础任务的标准工具，还独特地收录了丰富的垂直领域资源，如法律、医疗、金融行业的专用词库与数据集，甚至包含古诗词生成、歌词创作等趣味应用。其核心亮点在于极高的全面性与实用性，从基础的字典词典到前沿的 BERT、GPT-2 模型代码，再到高质量的标注数据和竞赛方案，应有尽有。\n\n无论是刚刚踏入 NLP 领域的学生、需要快速验证想法的算法工程师，还是从事人工智能研究的学者，都能在这里找到急需的“武器弹药”。对于开发者而言，它能大幅减少寻找数据和复现模型的时间；对于研究者，它提供了丰富的基准测试资源和前沿技术参考。funNLP 以开放共享的精神，极大地降低了中文自然语言处理的开发与研究成本，是中文 AI 社区不可或缺的宝藏仓库。",79857,1,"2026-04-08T20:11:31",[19,14,18],{"id":33,"name":34,"github_repo":35,"description_zh":36,"stars":37,"difficulty_score":29,"last_commit_at":38,"category_tags":39,"status":22},5773,"cs-video-courses","Developer-Y\u002Fcs-video-courses","cs-video-courses 是一个精心整理的计算机科学视频课程清单，旨在为自学者提供系统化的学习路径。它汇集了全球知名高校（如加州大学伯克利分校、新南威尔士大学等）的完整课程录像，涵盖从编程基础、数据结构与算法，到操作系统、分布式系统、数据库等核心领域，并深入延伸至人工智能、机器学习、量子计算及区块链等前沿方向。\n\n面对网络上零散且质量参差不齐的教学资源，cs-video-courses 解决了学习者难以找到成体系、高难度大学级别课程的痛点。该项目严格筛选内容，仅收录真正的大学层级课程，排除了碎片化的简短教程或商业广告，确保用户能接触到严谨的学术内容。\n\n这份清单特别适合希望夯实计算机基础的开发者、需要补充特定领域知识的研究人员，以及渴望像在校生一样系统学习计算机科学的自学者。其独特的技术亮点在于分类极其详尽，不仅包含传统的软件工程与网络安全，还细分了生成式 AI、大语言模型、计算生物学等新兴学科，并直接链接至官方视频播放列表，让用户能一站式获取高质量的教育资源，免费享受世界顶尖大学的课堂体验。",79792,"2026-04-08T22:03:59",[18,13,14,20],{"id":41,"name":42,"github_repo":43,"description_zh":44,"stars":45,"difficulty_score":46,"last_commit_at":47,"category_tags":48,"status":22},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[17,13,20,19,18],{"id":50,"name":51,"github_repo":52,"description_zh":53,"stars":54,"difficulty_score":46,"last_commit_at":55,"category_tags":56,"status":22},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 
等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",75812,"2026-04-17T10:36:11",[19,13,20,18],{"id":58,"name":59,"github_repo":60,"description_zh":61,"stars":62,"difficulty_score":29,"last_commit_at":63,"category_tags":64,"status":22},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,"2026-04-03T21:50:24",[20,18],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":80,"owner_twitter":80,"owner_website":82,"owner_url":83,"languages":84,"stars":89,"forks":90,"last_commit_at":91,"license":92,"difficulty_score":93,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":106,"github_topics":107,"view_count":10,"oss_zip_url":80,"oss_zip_packed_at":80,"status":22,"created_at":110,"updated_at":111,"faqs":112,"releases":143},8662,"MatthewJA\u002FInverse-Reinforcement-Learning","Inverse-Reinforcement-Learning","Implementations of selected inverse reinforcement learning algorithms.","Inverse-Reinforcement-Learning 是一个专注于逆向强化学习（IRL）算法的开源实现库。在传统强化学习中，智能体需要已知奖励函数才能学习策略，而该工具解决了“从专家行为反推奖励函数”的核心难题——即通过观察最优策略或轨迹，推断出驱动这些行为的潜在目标与动机。\n\n该项目完整复现了多种经典 IRL 算法，包括基于线性规划的方法、最大熵 IRL 以及结合深度学习的深度最大熵 IRL，并内置了 Gridworld 和 Objectworld 等标准测试环境供验证使用。其技术亮点在于不仅提供了基础理论代码，还利用 Theano 实现了符号计算支持的深度学习变体，方便用户探索不同状态空间下的奖励恢复效果。\n\nInverse-Reinforcement-Learning 非常适合人工智能研究人员、高校学生及算法开发者使用。对于希望深入理解 IRL 数学原理、复现论文实验或将其应用于机器人模仿学习、自动驾驶行为分析等场景的用户来说，这是一套结构清晰、文档详尽的参考实现。虽然部分依赖库如 Theano 已较陈旧，但其核心逻辑仍具极高的学习与研究价值。","# Inverse Reinforcement Learning\n\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.555999.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.555999)\n\nImplements selected inverse reinforcement learning (IRL) algorithms as part of COMP3710, supervised by Dr Mayank Daswani and Dr Marcus Hutter. My final report is available [here](https:\u002F\u002Falger.au\u002Fpdfs\u002Firl.pdf) and describes the implemented algorithms.\n\nIf you use this code in your work, you can cite it as follows:\n```bibtex\n@misc{alger16,\n  author       = {Matthew Alger},\n  title        = {Inverse Reinforcement Learning},\n  year         = 2016,\n  doi          = {10.5281\u002Fzenodo.555999},\n  url          = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.555999}\n}\n```\n\n## Algorithms implemented\n\n- Linear programming IRL. From Ng & Russell, 2000. Small state space and large state space linear programming IRL.\n- Maximum entropy IRL. From Ziebart et al., 2008.\n- Deep maximum entropy IRL. 
From Wulfmeier et al., 2015; original derivation.\n\nAdditionally, the following MDP domains are implemented:\n- Gridworld (Sutton, 1998)\n- Objectworld (Levine et al., 2011)\n\n## Requirements\n- NumPy\n- SciPy\n- CVXOPT\n- Theano\n- MatPlotLib (for examples)\n\n## Module documentation\n\nFollowing is a brief list of functions and classes exported by modules. Full documentation is included in the docstrings of each function or class; only functions and classes intended for use outside the module are documented here.\n\n### linear_irl\n\nImplements linear programming inverse reinforcement learning (Ng & Russell, 2000).\n\n**Functions:**\n\n- `irl(n_states, n_actions, transition_probability, policy, discount, Rmax, l1)`: Find a reward function with inverse RL.\n- `large_inverseRL(value, transition_probability, feature_matrix, n_states, n_actions, policy)`: Find the reward in a large state space.\n\n### maxent\n    \nImplements maximum entropy inverse reinforcement learning (Ziebart et al., 2008).\n\n**Functions:**\n\n- `irl(feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate)`: Find the reward function for the given trajectories.\n- `find_svf(feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate)`: Find the state visitation frequency from trajectories.\n- `find_feature_expectations(feature_matrix, trajectories)`:  Find the feature expectations for the given trajectories. This is the average path feature vector.\n- `find_expected_svf(n_states, r, n_actions, discount, transition_probability, trajectories)`: Find the expected state visitation frequencies using algorithm 1 from Ziebart et al. 2008.\n- `expected_value_difference(n_states, n_actions, transition_probability, reward, discount, p_start_state, optimal_value, true_reward)`: Calculate the expected value difference, which is a proxy to how good a recovered reward function is.\n\n### deep_maxent\n\nImplements deep maximum entropy inverse reinforcement learning based on Ziebart et al., 2008 and Wulfmeier et al., 2015, using symbolic methods with Theano.\n\n**Functions:**\n\n- `irl(structure, feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate, initialisation=\"normal\", l1=0.1, l2=0.1)`: Find the reward function for the given trajectories.\n- `find_svf(n_states, trajectories)`: Find the state vistiation frequency from trajectories.\n- `find_expected_svf(n_states, r, n_actions, discount, transition_probability, trajectories)`: Find the expected state visitation frequencies using algorithm 1 from Ziebart et al. 2008.\n\n### value_iteration\n\nFind the value function associated with a policy. Based on Sutton & Barto, 1998.\n\n**Functions:**\n\n- `value(policy, n_states, transition_probabilities, reward, discount, threshold=1e-2)`: Find the value function associated with a policy.\n- `optimal_value(n_states, n_actions, transition_probabilities, reward, discount, threshold=1e-2)`: Find the optimal value function.\n- `find_policy(n_states, n_actions, transition_probabilities, reward, discount, threshold=1e-2, v=None, stochastic=True)`: Find the optimal policy.\n\n### mdp\n\n#### gridworld\n\nImplements the gridworld MDP.\n\n**Classes, instance attributes, methods:**\n\n- `Gridworld(grid_size, wind, discount)`: Gridworld MDP.\n    - `actions`: Tuple of (dx, dy) actions.\n    - `n_actions`: Number of actions. int.\n    - `n_states`: Number of states. int.\n    - `grid_size`: Size of grid. 
int.\n    - `wind`: Chance of moving randomly. float.\n    - `discount`: MDP discount factor. float.\n    - `transition_probability`: NumPy array with shape (n_states, n_actions, n_states) where `transition_probability[si, a, sk]` is the probability of transitioning from state si to state sk under action a.\n    - `feature_vector(i, feature_map=\"ident\")`: Get the feature vector associated with a state integer.\n    - `feature_matrix(feature_map=\"ident\")`: Get the feature matrix for this gridworld.\n    - `int_to_point(i)`: Convert a state int into the corresponding coordinate.\n    - `point_to_int(p)`: Convert a coordinate into the corresponding state int.\n    - `neighbouring(i, k)`: Get whether two points neighbour each other. Also returns true if they are the same point.\n    - `reward(state_int)`: Reward for being in state state_int.\n    - `average_reward(n_trajectories, trajectory_length, policy)`: Calculate the average total reward obtained by following a given policy over n_paths paths.\n    - `optimal_policy(state_int)`: The optimal policy for this gridworld.\n    - `optimal_policy_deterministic(state_int)`: Deterministic version of the optimal policy for this gridworld.\n    - `generate_trajectories(n_trajectories, trajectory_length, policy, random_start=False)`: Generate n_trajectories trajectories with length trajectory_length, following the given policy.\n\n#### objectworld\n\nImplements the objectworld MDP described in Levine et al. 2011.\n\n**Classes, instance attributes, methods:**\n\n- `OWObject(inner_colour, outer_colour)`: Object in objectworld.\n    - `inner_colour`: Inner colour of object. int.\n    - `outer_colour`: Outer colour of object. int.\n\n- `Objectworld(grid_size, n_objects, n_colours, wind, discount)`: Objectworld MDP.\n    - `actions`: Tuple of (dx, dy) actions.\n    - `n_actions`: Number of actions. int.\n    - `n_states`: Number of states. int.\n    - `grid_size`: Size of grid. int.\n    - `n_objects`: Number of objects in the world. int.\n    - `n_colours`: Number of colours to colour objects with. int.\n    - `wind`: Chance of moving randomly. float.\n    - `discount`: MDP discount factor. float.\n    - `objects`: Set of objects in the world.\n    - `transition_probability`: NumPy array with shape (n_states, n_actions, n_states) where `transition_probability[si, a, sk]` is the probability of transitioning from state si to state sk under action a.\n    - `feature_vector(i, discrete=True)`: Get the feature vector associated with a state integer.\n    - `feature_matrix(discrete=True)`: Get the feature matrix for this gridworld.\n    - `int_to_point(i)`: Convert a state int into the corresponding coordinate.\n    - `point_to_int(p)`: Convert a coordinate into the corresponding state int.\n    - `neighbouring(i, k)`: Get whether two points neighbour each other. 
Also returns true if they are the same point.\n    - `reward(state_int)`: Reward for being in state state_int.\n    - `average_reward(n_trajectories, trajectory_length, policy)`: Calculate the average total reward obtained by following a given policy over n_paths paths.\n    - `generate_trajectories(n_trajectories, trajectory_length, policy)`: Generate n_trajectories trajectories with length trajectory_length, following the given policy.\n","# 逆强化学习\n\n[![DOI](https:\u002F\u002Fzenodo.org\u002Fbadge\u002FDOI\u002F10.5281\u002Fzenodo.555999.svg)](https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.555999)\n\n作为COMP3710课程的一部分，实现了部分逆强化学习（IRL）算法，由Mayank Daswani博士和Marcus Hutter博士指导。我的最终报告可在[这里](https:\u002F\u002Falger.au\u002Fpdfs\u002Firl.pdf)查阅，其中详细描述了所实现的算法。\n\n如果您在工作中使用了此代码，可以按如下方式引用：\n```bibtex\n@misc{alger16,\n  author       = {Matthew Alger},\n  title        = {Inverse Reinforcement Learning},\n  year         = 2016,\n  doi          = {10.5281\u002Fzenodo.555999},\n  url          = {https:\u002F\u002Fdoi.org\u002F10.5281\u002Fzenodo.555999}\n}\n```\n\n## 已实现的算法\n\n- 线性规划逆强化学习。源自Ng & Russell, 2000。包括小状态空间和大状态空间的线性规划逆强化学习。\n- 最大熵逆强化学习。源自Ziebart et al., 2008。\n- 深度最大熵逆强化学习。源自Wulfmeier et al., 2015；基于原始推导。\n\n此外，还实现了以下MDP领域：\n- Gridworld（Sutton, 1998）\n- Objectworld（Levine et al., 2011）\n\n## 需求\n\n- NumPy\n- SciPy\n- CVXOPT\n- Theano\n- MatPlotLib（用于示例）\n\n## 模块文档\n\n以下是各模块导出的函数和类的简要列表。完整的文档包含在每个函数或类的docstring中；此处仅列出那些 intended for use outside the module 的函数和类。\n\n### linear_irl\n\n实现线性规划逆强化学习（Ng & Russell, 2000）。\n\n**函数：**\n\n- `irl(n_states, n_actions, transition_probability, policy, discount, Rmax, l1)`：通过逆强化学习找到奖励函数。\n- `large_inverseRL(value, transition_probability, feature_matrix, n_states, n_actions, policy)`：在大状态空间中找到奖励。\n\n### maxent\n\n实现最大熵逆强化学习（Ziebart et al., 2008）。\n\n**函数：**\n\n- `irl(feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate)`：根据给定轨迹找到奖励函数。\n- `find_svf(feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate)`：从轨迹中找到状态访问频率。\n- `find_feature_expectations(feature_matrix, trajectories)`：计算给定轨迹的特征期望值，即平均路径特征向量。\n- `find_expected_svf(n_states, r, n_actions, discount, transition_probability, trajectories)`：使用Ziebart et al. 2008中的算法1，找到预期的状态访问频率。\n- `expected_value_difference(n_states, n_actions, transition_probability, reward, discount, p_start_state, optimal_value, true_reward)`：计算预期价值差异，用以评估恢复的奖励函数的好坏。\n\n### deep_maxent\n\n基于Ziebart et al., 2008和Wulfmeier et al., 2015，利用Theano的符号方法实现深度最大熵逆强化学习。\n\n**函数：**\n\n- `irl(structure, feature_matrix, n_actions, discount, transition_probability, trajectories, epochs, learning_rate, initialisation=\"normal\", l1=0.1, l2=0.1)`：根据给定轨迹找到奖励函数。\n- `find_svf(n_states, trajectories)`：从轨迹中找到状态访问频率。\n- `find_expected_svf(n_states, r, n_actions, discount, transition_probability, trajectories)`：使用Ziebart et al. 
2008中的算法1，找到预期的状态访问频率。\n\n### value_iteration\n\n根据策略计算其对应的值函数。基于Sutton & Barto, 1998。\n\n**函数：**\n\n- `value(policy, n_states, transition_probabilities, reward, discount, threshold=1e-2)`：计算给定策略的值函数。\n- `optimal_value(n_states, n_actions, transition_probabilities, reward, discount, threshold=1e-2)`：计算最优值函数。\n- `find_policy(n_states, n_actions, transition_probabilities, reward, discount, threshold=1e-2, v=None, stochastic=True)`：找到最优策略。\n\n### mdp\n\n#### gridworld\n\n实现网格世界MDP。\n\n**类、实例属性、方法：**\n\n- `Gridworld(grid_size, wind, discount)`: 网格世界MDP。\n  - `actions`: 动作的元组，形式为(dx, dy)。\n  - `n_actions`: 动作数量。整数。\n  - `n_states`: 状态数量。整数。\n  - `grid_size`: 网格大小。整数。\n  - `wind`: 随机移动的概率。浮点数。\n  - `discount`: MDP折扣因子。浮点数。\n  - `transition_probability`: 形状为(n_states, n_actions, n_states)的NumPy数组，其中`transition_probability[si, a, sk]`表示在动作a下从状态si转移到状态sk的概率。\n  - `feature_vector(i, feature_map=\"ident\")`: 获取与状态整数i关联的特征向量。\n  - `feature_matrix(feature_map=\"ident\")`: 获取该网格世界的特征矩阵。\n  - `int_to_point(i)`: 将状态整数转换为对应的坐标。\n  - `point_to_int(p)`: 将坐标转换为对应的状态整数。\n  - `neighbouring(i, k)`: 判断两个点是否相邻。如果两点相同，也返回True。\n  - `reward(state_int)`: 处于状态state_int时的奖励。\n  - `average_reward(n_trajectories, trajectory_length, policy)`: 计算按照给定策略在n_paths条轨迹上获得的平均总奖励。\n  - `optimal_policy(state_int)`: 该网格世界的最优策略。\n  - `optimal_policy_deterministic(state_int)`: 该网格世界的确定性最优策略。\n  - `generate_trajectories(n_trajectories, trajectory_length, policy, random_start=False)`: 按照给定策略生成n_trajectories条长度为trajectory_length的轨迹。\n\n#### objectworld\n\n实现Levine等2011年描述的对象世界MDP。\n\n**类、实例属性、方法：**\n\n- `OWObject(inner_colour, outer_colour)`: 对象世界中的对象。\n  - `inner_colour`: 对象的内层颜色。整数。\n  - `outer_colour`: 对象的外层颜色。整数。\n\n- `Objectworld(grid_size, n_objects, n_colours, wind, discount)`: 对象世界MDP。\n  - `actions`: 动作的元组，形式为(dx, dy)。\n  - `n_actions`: 动作数量。整数。\n  - `n_states`: 状态数量。整数。\n  - `grid_size`: 网格大小。整数。\n  - `n_objects`: 世界中对象的数量。整数。\n  - `n_colours`: 可用于给对象上色的颜色种类数。整数。\n  - `wind`: 随机移动的概率。浮点数。\n  - `discount`: MDP折扣因子。浮点数。\n  - `objects`: 世界中对象的集合。\n  - `transition_probability`: 形状为(n_states, n_actions, n_states)的NumPy数组，其中`transition_probability[si, a, sk]`表示在动作a下从状态si转移到状态sk的概率。\n  - `feature_vector(i, discrete=True)`: 获取与状态整数i关联的特征向量。\n  - `feature_matrix(discrete=True)`: 获取该网格世界的特征矩阵。\n  - `int_to_point(i)`: 将状态整数转换为对应的坐标。\n  - `point_to_int(p)`: 将坐标转换为对应的状态整数。\n  - `neighbouring(i, k)`: 判断两个点是否相邻。如果两点相同，也返回True。\n  - `reward(state_int)`: 处于状态state_int时的奖励。\n  - `average_reward(n_trajectories, trajectory_length, policy)`: 计算按照给定策略在n_paths条轨迹上获得的平均总奖励。\n  - `generate_trajectories(n_trajectories, trajectory_length, policy)`: 按照给定策略生成n_trajectories条长度为trajectory_length的轨迹。","# Inverse-Reinforcement-Learning 快速上手指南\n\n本指南帮助中国开发者快速部署并使用 **Inverse-Reinforcement-Learning** 项目，该项目实现了多种逆向强化学习（IRL）算法及经典的 MDP 环境（如 Gridworld 和 Objectworld）。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux, macOS 或 Windows (推荐 Linux 以获得最佳兼容性)\n*   **Python 版本**: 建议 Python 3.6+ (注意：由于依赖 `Theano`，部分旧版代码可能对 Python 版本敏感，若遇兼容性问题可尝试 Python 3.7)\n*   **核心依赖库**:\n    *   `NumPy`: 数值计算基础\n    *   `SciPy`: 科学计算工具\n    *   `CVXOPT`: 凸优化求解器 (用于线性规划 IRL)\n    *   `Theano`: 符号数学库 (用于深度最大熵 IRL)\n    *   `MatPlotLib`: 可视化绘图 (用于运行示例)\n\n> **国内加速建议**：安装依赖时，推荐使用清华或阿里云镜像源以提升下载速度。\n\n## 安装步骤\n\n### 1. 克隆项目代码\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning.git\ncd Inverse-Reinforcement-Learning\n```\n\n### 2. 
安装 Python 依赖\n\n使用 `pip` 安装所需库。为确保下载速度，以下命令配置了清华大学开源软件镜像源：\n\n```bash\npip install numpy scipy cvxopt matplotlib -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**安装 Theano**:\n由于 `Theano` 已停止官方维护且安装较为复杂，建议直接通过 pip 安装其最后稳定版，或根据具体系统参考 Theano 官方文档配置 GPU 支持（如需加速深度学习部分）：\n\n```bash\npip install theano -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n*注：如果在安装 `cvxopt` 时遇到编译错误，请确保系统已安装 C\u002FC++ 编译器（如 Linux 下的 `build-essential` 或 Windows 下的 Visual Studio Build Tools）。*\n\n## 基本使用\n\n本项目模块化设计清晰，主要包含 `linear_irl` (线性规划), `maxent` (最大熵), `deep_maxent` (深度最大熵) 以及 MDP 环境 (`gridworld`, `objectworld`)。\n\n以下是一个最简单的使用示例：构建一个 **Gridworld** 环境，生成演示轨迹，并使用 **最大熵 IRL (MaxEnt)** 算法恢复奖励函数。\n\n### 示例代码：基于最大熵的 Gridworld IRL\n\n```python\nimport numpy as np\nfrom mdp.gridworld import Gridworld\nfrom maxent import irl as maxent_irl\n\n# 1. 初始化 Gridworld 环境\n# 参数：网格大小 5x5, 随机风概率 0.1, 折扣因子 0.9\nenv = Gridworld(grid_size=5, wind=0.1, discount=0.9)\n\n# 2. 获取环境的转移概率和特征矩阵\ntransition_prob = env.transition_probability\nfeature_matrix = env.feature_matrix()\n\n# 3. 生成专家演示轨迹 (Trajectories)\n# 这里使用环境的最优策略来模拟\"专家\"行为\nn_trajectories = 50\ntrajectory_length = 20\npolicy = env.optimal_policy_deterministic\ntrajectories = env.generate_trajectories(n_trajectories, trajectory_length, policy)\n\n# 4. 运行最大熵逆向强化学习算法\n# 参数说明：\n# feature_matrix: 状态特征矩阵\n# n_actions: 动作数量\n# discount: 折扣因子\n# transition_probability: 转移概率矩阵\n# trajectories: 专家演示轨迹\n# epochs: 训练迭代次数\n# learning_rate: 学习率\nreward_weights = maxent_irl(\n    feature_matrix=feature_matrix,\n    n_actions=env.n_actions,\n    discount=env.discount,\n    transition_probability=transition_prob,\n    trajectories=trajectories,\n    epochs=1000,\n    learning_rate=0.01\n)\n\nprint(\"恢复的奖励权重:\", reward_weights)\n\n# 5. 
(可选) 将权重映射回网格进行可视化\nrecovered_reward_grid = np.dot(feature_matrix, reward_weights).reshape((env.grid_size, env.grid_size))\nprint(\"恢复的奖励网格:\\n\", recovered_reward_grid)\n```\n\n### 模块功能简述\n\n*   **`linear_irl`**: 适用于小规模或大规模状态空间的线性规划方法 (Ng & Russell, 2000)。调用 `irl` 或 `large_inverseRL`。\n*   **`maxent`**: 经典最大熵 IRL (Ziebart et al., 2008)。核心函数为 `irl`，需输入轨迹和特征矩阵。\n*   **`deep_maxent`**: 基于 Theano 的深度最大熵 IRL (Wulfmeier et al., 2015)，适用于更复杂的非线性奖励函数拟合。\n*   **`mdp\u002Fgridworld` & `mdp\u002Fobjectworld`**: 内置的标准测试环境，可直接实例化并获取 `transition_probability`、`feature_matrix` 及生成 `trajectories`。","某自动驾驶初创团队正试图让无人配送车在复杂社区中模仿资深人类司机的驾驶风格，而非仅遵循刻板的交通规则。\n\n### 没有 Inverse-Reinforcement-Learning 时\n- **奖励函数设计靠猜**：工程师只能手动编写“减速”、“避让”等规则权重，难以量化老司机那种“既安全又流畅”的微妙平衡，导致车辆行驶生硬。\n- **数据价值未被挖掘**：团队收集了大量人类专家的实际驾驶轨迹数据，但因无法反向推导其背后的决策逻辑，这些数据仅能用于测试，无法直接指导模型训练。\n- **长尾场景应对失效**：面对未明确编码的特殊路况（如狭窄巷道会车），由于缺乏深层奖励机制，车辆往往陷入停滞或做出危险判断，调试周期长达数周。\n\n### 使用 Inverse-Reinforcement-Learning 后\n- **自动还原专家意图**：利用 `maxent` 或 `deep_maxent` 模块，直接输入人类驾驶轨迹，算法自动反推出隐含的奖励函数，精准捕捉到“效率与安全感”的最佳权衡点。\n- **变废为宝的数据闭环**：原本闲置的轨迹数据成为核心资产，通过 `find_feature_expectations` 提取特征期望，让模型快速习得人类司机的隐性经验，无需人工标注规则。\n- **泛化能力显著提升**：基于学习到的深层奖励逻辑，车辆在未见过的复杂场景中也能像人类一样灵活博弈，将新场景的适配时间从数周缩短至几天。\n\nInverse-Reinforcement-Learning 的核心价值在于它将“教机器怎么做”转变为“让机器理解为什么这么做”，实现了从规则驱动到意图驱动的跨越。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FMatthewJA_Inverse-Reinforcement-Learning_38efa9ce.png","MatthewJA","Matthew Alger","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FMatthewJA_b8928fe8.jpg","Astronomer & data wrangler",null,"Australia","https:\u002F\u002Falger.au","https:\u002F\u002Fgithub.com\u002FMatthewJA",[85],{"name":86,"color":87,"percentage":88},"Python","#3572A5",100,1069,238,"2026-04-13T09:42:09","MIT",4,"","未说明 (依赖 Theano，通常支持 CPU 或旧版 NVIDIA GPU，但无具体型号要求)","未说明",{"notes":98,"python":99,"dependencies":100},"该项目发布于 2016 年，核心深度学习功能依赖已停止维护的 Theano 框架。README 中未明确指定操作系统、Python 版本及硬件配置。由于依赖库较旧，在现代环境中直接运行可能面临兼容性问题，建议构建旧的虚拟环境（如 Python 3.5 + Theano 1.0）进行测试。","未说明 (基于 2016 年项目背景及 Theano 依赖，推测为 Python 2.7 或 3.5\u002F3.6)",[101,102,103,104,105],"NumPy","SciPy","CVXOPT","Theano","MatPlotLib",[18],[108,109],"inverse-reinforcement-learning","reinforcement-learning","2026-03-27T02:49:30.150509","2026-04-18T03:33:02.566996",[113,118,123,128,133,138],{"id":114,"question_zh":115,"answer_zh":116,"source_url":117},38787,"运行代码时遇到 'super() takes at least 1 argument (0 given)' 错误，该如何解决？","该代码仅支持 Python 3（推荐 Python 3.4+）。此错误是因为在 Python 2 中使用了 Python 3 风格的 `super()` 调用语法。请确保使用 Python 3 环境运行代码，不要使用 Python 2.7。","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F3",{"id":119,"question_zh":120,"answer_zh":121,"source_url":122},38788,"该项目是基于 Python 2 还是 Python 3 开发的？","该项目是使用 Python 3 实现的。如果在测试代码时遇到关于 `super()` 函数的故障，请确认您正在使用 Python 3 环境。","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F2",{"id":124,"question_zh":125,"answer_zh":126,"source_url":127},38789,"README 中的报告链接失效了，哪里可以查看算法原理？","该问题已通过 Pull Request #13 修复。您可以参考 Ziebart 的原始论文或访问以下链接获取最大熵逆强化学习算法的详细说明：http:\u002F\u002Fwww.cs.cmu.edu\u002F~bziebart\u002Fpublications\u002Fmaximum-entropy-inverse-reinforcement-learning.html","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F12",{"id":129,"question_zh":130,"answer_zh":131,"source_url":132},38790,"修改 Gridworld 的目标状态位置后，线性 IRL (linear_irl) 生成的奖励估计不正确怎么办？","这是因为 `Gridworld` 类中的最优策略（optimal 
policy）也是硬编码的，默认指向右上角。如果更改了奖励函数（例如将目标状态移到底部右下角），必须同时硬编码与之匹配的新最优策略。示例代码如下：\n\ndef reward(self, state_int):\n    hardcoded_reward = [-1, -1, -1, 5, -5, -1, -1, -5, -1]\n    return hardcoded_reward[state_int]\n\ndef optimal_policy_deterministic(self, state_int):\n    hardcoded_policy = [1, 2, 2, 2, 2, 3, 3, 2, 3]\n    return hardcoded_policy[state_int]","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F17",{"id":134,"question_zh":135,"answer_zh":136,"source_url":137},38791,"代码中状态访问频率（expected_svf）的计算公式与 Ziebart 论文中的描述不一致，是代码错了吗？","代码中的计算逻辑是正确的。实际上是 Ziebart 原始论文中的公式存在笔误。关于该问题的详细解释和更正说明，请参阅作者在此处的备注：http:\u002F\u002Fwww.cs.cmu.edu\u002F~bziebart\u002Fpublications\u002Fmaximum-entropy-inverse-reinforcement-learning.html","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F7",{"id":139,"question_zh":140,"answer_zh":141,"source_url":142},38792,"Gridworld 的转移概率矩阵（transition_probability）某行之和不为 1，这是 Bug 吗？","是的，这是一个已确认的问题。在特定的网格世界配置下（如 `gridworld.Gridworld(5, .3, .2)`），转移概率之和确实可能不等于 1。维护者已确认此为缺陷并计划修复。在使用相关功能时请注意此潜在的概率归一化问题。","https:\u002F\u002Fgithub.com\u002FMatthewJA\u002FInverse-Reinforcement-Learning\u002Fissues\u002F1",[144],{"id":145,"version":146,"summary_zh":147,"released_at":148},314727,"v0.1","在Zenodo上发布。","2017-04-18T10:26:45"]