[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-lipiji--App-DL":3,"tool-lipiji--App-DL":65},[4,17,25,39,48,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",138956,2,"2026-04-05T11:33:21",[13,14,15],"开发框架","Agent","语言模型","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":10,"last_commit_at":23,"category_tags":24,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,15],{"id":26,"name":27,"github_repo":28,"description_zh":29,"stars":30,"difficulty_score":10,"last_commit_at":31,"category_tags":32,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[33,34,35,36,14,37,15,13,38],"图像","数据工具","视频","插件","其他","音频",{"id":40,"name":41,"github_repo":42,"description_zh":43,"stars":44,"difficulty_score":45,"last_commit_at":46,"category_tags":47,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,3,"2026-04-04T04:44:48",[14,33,13,15,37],{"id":49,"name":50,"github_repo":51,"description_zh":52,"stars":53,"difficulty_score":45,"last_commit_at":54,"category_tags":55,"status":16},519,"PaddleOCR","PaddlePaddle\u002FPaddleOCR","PaddleOCR 是一款基于百度飞桨框架开发的高性能开源光学字符识别工具包。它的核心能力是将图片、PDF 等文档中的文字提取出来，转换成计算机可读取的结构化数据，让机器真正“看懂”图文内容。\n\n面对海量纸质或电子文档，PaddleOCR 解决了人工录入效率低、数字化成本高的问题。尤其在人工智能领域，它扮演着连接图像与大型语言模型（LLM）的桥梁角色，能将视觉信息直接转化为文本输入，助力智能问答、文档分析等应用场景落地。\n\nPaddleOCR 适合开发者、算法研究人员以及有文档自动化需求的普通用户。其技术优势十分明显：不仅支持全球 100 多种语言的识别，还能在 Windows、Linux、macOS 等多个系统上运行，并灵活适配 CPU、GPU、NPU 等各类硬件。作为一个轻量级且社区活跃的开源项目，PaddleOCR 既能满足快速集成的需求，也能支撑前沿的视觉语言研究，是处理文字识别任务的理想选择。",74913,"2026-04-05T10:44:17",[15,33,13,37],{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":62,"last_commit_at":63,"category_tags":64,"status":16},3215,"awesome-machine-learning","josephmisiti\u002Fawesome-machine-learning","awesome-machine-learning 是一份精心整理的机器学习资源清单，汇集了全球优秀的机器学习框架、库和软件工具。面对机器学习领域技术迭代快、资源分散且难以甄选的痛点，这份清单按编程语言（如 Python、C++、Go 等）和应用场景（如计算机视觉、自然语言处理、深度学习等）进行了系统化分类，帮助使用者快速定位高质量项目。\n\n它特别适合开发者、数据科学家及研究人员使用。无论是初学者寻找入门库，还是资深工程师对比不同语言的技术选型，都能从中获得极具价值的参考。此外，清单还延伸提供了免费书籍、在线课程、行业会议、技术博客及线下聚会等丰富资源，构建了从学习到实践的全链路支持体系。\n\n其独特亮点在于严格的维护标准：明确标记已停止维护或长期未更新的项目，确保推荐内容的时效性与可靠性。作为机器学习领域的“导航图”，awesome-machine-learning 以开源协作的方式持续更新，旨在降低技术探索门槛，让每一位从业者都能高效地站在巨人的肩膀上创新。",72149,1,"2026-04-03T21:50:24",[13,37],{"id":66,"github_repo":67,"name":68,"description_en":69,"description_zh":70,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":82,"owner_twitter":83,"owner_website":84,"owner_url":85,"languages":81,"stars":86,"forks":87,"last_commit_at":88,"license":81,"difficulty_score":89,"env_os":90,"env_gpu":91,"env_ram":91,"env_deps":92,"category_tags":105,"github_topics":81,"view_count":45,"oss_zip_url":81,"oss_zip_packed_at":81,"status":16,"created_at":106,"updated_at":107,"faqs":108,"releases":114},914,"lipiji\u002FApp-DL","App-DL","Deep Learning and applications in Startups, CV, NLP","App-DL 是一个专注于深度学习的开源资源集合，主要涵盖创业应用、计算机视觉和自然语言处理等领域。它整理了相关的研究论文、教程和实用资料，帮助用户快速了解深度学习在不同场景下的最新进展和应用方法。\n\n这个工具主要解决了学习者和开发者面对海量学术资料时难以筛选和系统化学习的问题。通过将高质量的资源按主题分类汇总，App-DL 降低了入门和跟进前沿技术的门槛，让用户能更高效地找到所需的学习材料或研究参考。\n\nApp-DL 适合人工智能领域的学生、研究人员以及技术创业者使用。对于学术研究者，它提供了强化学习、对话系统、文本生成等方向的经典与最新论文；对于创业者和工程师，它则包含了将深度学习应用于实际业务场景的案例和思路参考。\n\n其内容结构清晰，特别在任务型对话系统、深度强化学习等细分领域收录了较多实践性较强的资料，例如结合规划技术的对话策略学习、端到端的购物对话系统构建等。这些资源有助于用户从理论到实践进行连贯探索。\n\n整体而言，App-DL 是一个侧重实用性与前沿性的深度学习资源导航项目，旨在通过整理分散的知识，帮助用户更便捷地学习和应用相关技术。","### Startups\n  - [机器学习、深度学习、计算机视觉、大数据创业公司 - Startups in AI](https:\u002F\u002Fgithub.com\u002Flipiji\u002FAIStartups)\n\n##  Deep Reinforcement Learning\n - David Silver. \"[Tutorial: Deep Reinforcement Learning](http:\u002F\u002Ficml.cc\u002F2016\u002Ftutorials\u002Fdeep_rl_tutorial.pdf).\" ICML 2016.\n - David Silver’s course. \"[Reinforcement Learning](http:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002FD.Silver\u002Fweb\u002FTeaching.html)\". 2015.\n - Bahdanau, Dzmitry, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. \"[An Actor-Critic Algorithm for Sequence Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.07086).\" arXiv preprint arXiv:1607.07086 (2016).\n - Li, Jiwei, Will Monroe, Alan Ritter, and Dan Jurafsky. \"[Deep Reinforcement Learning for Dialogue Generation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01541).\" arXiv preprint arXiv:1606.01541 (2016).\n - Pathak, Deepak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. \"[Curiosity-driven Exploration by Self-supervised Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.05363).\" arXiv preprint arXiv:1705.05363 (2017).\n - Keneshloo, Yaser, Tian Shi, Chandan K. Reddy, and Naren Ramakrishnan. \"[Deep Reinforcement Learning For Sequence to Sequence Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09461).\" arXiv preprint arXiv:1805.09461 (2018).\n\n## Dialogue System\n- Jiang, Shaojie, and Maarten de Rijke. \"[Why are Sequence-to-Sequence Models So Dull?](https:\u002F\u002Fstaff.fnwi.uva.nl\u002Fm.derijke\u002Fwp-content\u002Fpapercite-data\u002Fpdf\u002Fjiang-why-2018.pdf).\" report, 2018.\n- Eric Chu, Prashanth Vijayaraghavan, Deb Roy. \"[Learning Personas from Dialogue with Attentive Memory Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.08717).\" EMNLP (2018).\n- Ruizhe Li, Chenghua Lin, Matthew Collinson, Xiao Li, Guanyi Chen. \"[A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.09154).\"  arXiv:1810.09154 (2018).\n\n#### Task-Oriented Dialogue\n- Wen, Tsung-Hsien, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. \"[A network-based end-to-end trainable task-oriented dialogue system](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.04562).\" arXiv preprint arXiv:1604.04562 (2016).\n- Li, Xiujun, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. \"[End-to-end task-completion neural dialogue systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01008).\" arXiv preprint arXiv:1703.01008 (2017).\n- Li, Xiujun, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. \"[A user simulator for task-completion dialogues](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05688).\" arXiv preprint arXiv:1612.05688 (2016).\n- Yan, Zhao, Nan Duan, Peng Chen, Ming Zhou, Jianshe Zhou, and Zhoujun Li. \"[Building Task-Oriented Dialogue Systems for Online Shopping](http:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002FviewPaper\u002F14261).\" In AAAI, pp. 4618-4626. 2017.\n- Peng, Baolin, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. \"[Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-1203).\" ACL, vol. 1, pp. 2182-2192. 2018.\n- Janarthanan Rajendran, Jatin Ganhotra, Satinder Singh, Lazaros Polymenakos. \"[Learning End-to-End Goal-Oriented Dialog with Multiple Answers](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.09996).\" arXiv preprint arXiv:1808.09996 (2018).\n\n## Text Generation\n- Rennie, Steven J., Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. \"[Self-critical sequence training for image captioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00563).\" arXiv preprint arXiv:1612.00563 (2016).\n- Lin, Kevin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. \"[Adversarial Ranking for Language Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.11001.pdf).\" arXiv preprint arXiv:1705.11001 (2017).\n- Zhang, Li, Flood Sung, Feng Liu, Tao Xiang, Shaogang Gong, Yongxin Yang, and Timothy M. Hospedales. \"[Actor-Critic Sequence Training for Image Captioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09601).\" arXiv preprint arXiv:1706.09601 (2017).\n- Wiseman, Sam, Stuart M. Shieber, and Alexander M. Rush. \"[Challenges in Data-to-Document Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08052).\" arXiv preprint arXiv:1707.08052 (2017).\n- Lebret, Rémi, David Grangier, and Michael Auli. \"[Neural text generation from structured data with application to the biography domain](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07771).\" arXiv preprint arXiv:1603.07771 (2016).\n- Chisholm, Andrew, Will Radford, and Ben Hachey. \"[Learning to generate one-sentence biographies from Wikidata](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.06235).\" arXiv preprint arXiv:1702.06235 (2017).\n- Sha, Lei, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. \"[Order-Planning Neural Text Generation From Structured Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00155).\" arXiv preprint arXiv:1709.00155 (2017).\n- Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, Jun Wang. \"[Long Text Generation via Adversarial Training with Leaked Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.08624).\" arXiv preprint  arXiv:1709.08624 (2017).\n- Guu, Kelvin, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. \"[Generating Sentences by Editing Prototypes](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.08878).\" arXiv preprint arXiv:1709.08878 (2017).\n- Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui. \"[Table-to-text Generation by Structure-aware Seq2seq Learnings](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.09724).\" arXiv preprint arXiv:1711.09724 (2017).\n- Kahou, Samira Ebrahimi, Adam Atkinson, Vincent Michalski, Akos Kadar, Adam Trischler, and Yoshua Bengio. \"[FigureQA: An Annotated Figure Dataset for Visual Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.07300).\" arXiv preprint arXiv:1710.07300 (2017).\n- Murakami, Soichiro, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, and Yusuke Miyao. \"[Learning to Generate Market Comments from Stock Prices](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-1126).\" In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 1374-1384. 2017.\n- Mueller, Jonas, David Gifford, and Tommi Jaakkola. \"[Sequence to better sequence: continuous revision of combinatorial structures](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmueller17a.html).\" In International Conference on Machine Learning, pp. 2536-2544. 2017.\n- Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer. \"[Generating Wikipedia by Summarizing Long Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.10198).\" ICLR 2018.\n- Clark, Elizabeth, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. \"[Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories](https:\u002F\u002Fhomes.cs.washington.edu\u002F~ansross\u002Fpapers\u002Fiui2018-creativewriting.pdf).\" (2018).\n- Gehrmann, Sebastian, S. E. A. S. Harvard, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. \"[End-to-End Content and Plan Selection for Natural Language Generation](https:\u002F\u002Fscholar.harvard.edu\u002Ffiles\u002Fgehrmann\u002Ffiles\u002Fe2e-harvardnlp.pdf).\"\n- Juncen Li, Robin Jia, He He, Percy Liang. \"[Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.06437).\" arXiv:1804.06437 2018.\n- Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang. \"[Incorporating Pseudo-Parallel Data for Quantifiable Sequence Editing](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.07007).\" arXiv:1804.07007 2018.\n- Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang. \"[No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09160).\" arXiv:1804.09160 2018.\n- Sam Wiseman, Stuart M. Shieber, Alexander M. Rush. \"[Learning Neural Templates for Text Generation\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.10122).\" arXiv:1808.10122 2018.\n\n\n## Text Summarization\n  - Ryang, Seonggi, and Takeshi Abekawa. \"[Framework of automatic text summarization using reinforcement learning](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2390980).\" In Proceedings of the 2012 Joint Conference on Empirical Methods in Natural Language Processing and Computational Natural Language Learning, pp. 256-265. Association for Computational Linguistics, 2012. [not neural-based methods]\n  - King, Ben, Rahul Jha, Tyler Johnson, Vaishnavi Sundararajan, and Clayton Scott. \"[Experiments in Automatic Text Summarization Using Deep Neural Networks](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.459.8775&rep=rep1&type=pdf).\" Machine Learning (2011).\n  - Liu, Yan, Sheng-hua Zhong, and Wenjie Li. \"[Query-Oriented Multi-Document Summarization via Unsupervised Deep Learning](http:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI12\u002Fpaper\u002Fview\u002F5058\u002F5322).\" AAAI. 2012.\n  - Rioux, Cody, Sadid A. Hasan, and Yllias Chali. \"[Fear the REAPER: A System for Automatic Multi-Document Summarization with Reinforcement Learning](http:\u002F\u002Femnlp2014.org\u002Fpapers\u002Fpdf\u002FEMNLP2014075.pdf).\" In EMNLP, pp. 681-690. 2014.[not neural-based methods]\n  - PadmaPriya, G., and K. Duraiswamy. \"[An Approach For Text Summarization Using Deep Learning Algorithm](http:\u002F\u002Fthescipub.com\u002FPDF\u002Fjcssp.2014.1.9.pdf).\" Journal of Computer Science 10, no. 1 (2013): 1-9.\n  - Denil, Misha, Alban Demiraj, and Nando de Freitas. \"[Extraction of Salient Sentences from Labelled Documents](http:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6815).\" arXiv preprint arXiv:1412.6815 (2014).\n  - Kågebäck, Mikael, et al. \"[Extractive summarization using continuous vector space models](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1504).\" Proceedings of the 2nd Workshop on Continuous Vector Space Models and their Compositionality (CVSC)@ EACL. 2014.\n  - Denil, Misha, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom, and Nando de Freitas. \"[Modelling, Visualising and Summarising Documents with a Single Convolutional Neural Network](http:\u002F\u002Farxiv.org\u002Fabs\u002F1406.3830).\" arXiv preprint arXiv:1406.3830 (2014).\n  - Cao, Ziqiang, Furu Wei, Li Dong, Sujian Li, and Ming Zhou. \"[Ranking with Recursive Neural Networks and Its Application to Multi-document Summarization](http:\u002F\u002Fgana.nlsde.buaa.edu.cn\u002F~lidong\u002Faaai15-rec_sentence_ranking.pdf).\" (AAAI'2015).\n  - Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh, and Noah A. Smith. \"[Toward Abstractive Summarization Using Semantic Representations](http:\u002F\u002Fwww.cs.cmu.edu\u002F~nasmith\u002Fpapers\u002Fliu+flanigan+thomson+sadeh+smith.naacl15.pdf).\" NAACL 2015\n  - Wenpeng Yin， Yulong Pei. \"Optimizing Sentence Modeling and Selection for Document Summarization.\" IJCAI 2015\n  - He, Zhanying, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai, and Xiaofei He. \"[Document Summarization Based on Data Reconstruction](http:\u002F\u002Fcs.nju.edu.cn\u002Fzlj\u002Fpdf\u002FAAAI-2012-He.pdf).\" In AAAI. 2012.\n  - Liu, He, Hongliang Yu, and Zhi-Hong Deng. \"[Multi-Document Summarization Based on Two-Level Sparse Representation Model](http:\u002F\u002Fwww.cis.pku.edu.cn\u002Ffaculty\u002Fsystem\u002Fdengzhihong\u002Fpapers\u002FAAAI%202015_Multi-Document%20Summarization%20Based%20on%20Two-Level%20Sparse%20Representation%20Model.pdf).\" In Twenty-Ninth AAAI Conference on Artificial Intelligence. 2015.\n  - Jin-ge Yao, Xiaojun Wan, Jianguo Xiao. \"[Compressive Document Summarization via Sparse Optimization](http:\u002F\u002Fijcai.org\u002FProceedings\u002F15\u002FPapers\u002F198.pdf).\" IJCAI 2015\n  - Piji Li, Lidong Bing, Wai Lam, Hang Li, and Yi Liao. \"[Reader-Aware Multi-Document Summarization via Sparse Coding](http:\u002F\u002Farxiv.org\u002Fabs\u002F1504.07324).\" IJCAI 2015.\n  - Lopyrev, Konstantin. \"[Generating News Headlines with Recurrent Neural Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1512.01712).\" arXiv preprint arXiv:1512.01712 (2015). [The first paragraph as document.]\n  - Alexander M. Rush, Sumit Chopra, Jason Weston. \"[A Neural Attention Model for Abstractive Sentence Summarization](http:\u002F\u002Farxiv.org\u002Fabs\u002F1509.00685).\" EMNLP 2015. [sentence compression]\n  - Hu, Baotian, Qingcai Chen, and Fangze Zhu. \"[LCSTS: a large scale chinese short text summarization dataset](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.05865).\" arXiv preprint arXiv:1506.05865 (2015).\n  - Gulcehre, Caglar, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. \"[Pointing the Unknown Words](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08148).\" arXiv preprint arXiv:1603.08148 (2016).\n  - Nallapati, Ramesh, Bing Xiang, and Bowen Zhou. \"[Abstractive Text Summarization Using Sequence-to-Sequence RNNs and Beyond](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.06023).\" arXiv preprint arXiv:1602.06023 (2016). [sentence compression]\n  - Sumit Chopra, Alexander M. Rush and Michael Auli. \"[Abstractive Sentence Summarization with Attentive Recurrent Neural Networks](http:\u002F\u002Fharvardnlp.github.io\u002Fpapers\u002Fnaacl16_summary.pdf)\" NAACL 2016.\n  - Jiatao Gu, Zhengdong Lu, Hang Li, Victor O.K. Li. \"[Incorporating Copying Mechanism in Sequence-to-Sequence Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06393).\" ACL. (2016)\n  - Jianpeng Cheng, Mirella Lapata. \"[Neural Summarization by Extracting Sentences and Words](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07252)\". ACL. (2016)\n  - Zhang, Jianmin, Jin-ge Yao, and Xiaojun Wan. \"[Toward constructing sports news from live text commentary](http:\u002F\u002Fwww.icst.pku.edu.cn\u002Flcwm\u002Fwanxj\u002Ffiles\u002Facl16_sports.pdf).\" In Proceedings of ACL. 2016.\n  - Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei. \"[AttSum: Joint Learning of Focusing and Summarization with Neural Attention](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.00125)\".  arXiv:1604.00125 (2016)\n  - Ayana, Shiqi Shen, Zhiyuan Liu, Maosong Sun. \"[Neural Headline Generation with Sentence-wise Optimization](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.01904)\". arXiv:1604.01904 (2016)\n  - Kikuchi, Yuta, Graham Neubig, Ryohei Sasano, Hiroya Takamura, and Manabu Okumura. \"[Controlling Output Length in Neural Encoder-Decoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.09552).\" arXiv preprint arXiv:1609.09552 (2016).\n  - Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei and Hui Jiang. \"[Distraction-Based Neural Networks for Document Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.08462).\" IJCAI 2016.\n  - Wang, Lu, and Wang Ling. \"[Neural Network-Based Abstract Generation for Opinions and Arguments](http:\u002F\u002Fwww.ccs.neu.edu\u002Fhome\u002Fluwang\u002Fpapers\u002FNAACL2016.pdf).\" NAACL 2016.\n  - Yishu Miao, Phil Blunsom. \"[Language as a Latent Variable: Discrete Generative Models for Sentence Compression](http:\u002F\u002Farxiv.org\u002Fabs\u002F1609.07317).\" EMNLP 2016.\n  - Takase, Sho, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao, and Masaaki Nagata. \"[Neural headline generation on abstract meaning representation](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD\u002FD16\u002FD16-1112.pdf).\" EMNLP, pp. 1054-1059. 2016.\n  - Hongya Song, Zhaochun Ren, Piji Li, Shangsong Liang, Jun Ma, and Maarten de Rijke. [Summarizing Answers in Non-Factoid Community Question-Answering](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=3018704). In WSDM 2017: The 10th International Conference on Web Search and Data Mining, 2017.\n  - Wenyuan Zeng, Wenjie Luo, Sanja Fidler, Raquel Urtasun. \"[Efficient Summarization with Read-Again and Copy Mechanism](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.03382).\" arXiv preprint arXiv:1611.03382 (2016).\n  - Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, Lidong Bing. \"[Salience Estimation via Variational Auto-Encoders for Multi-Document Summarization](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002Fview\u002F14613)\". In AAAI, 2017.\n  - Ramesh Nallapati, Feifei Zhai, Bowen Zhou. [SummaRuNNer: A Recurrent Neural Network based Sequence Model for Extractive Summarization of Documents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04230). In AAAI, 2017.\n  - Ramesh Nallapati, Bowen Zhou, Mingbo Ma. \"[Classify or Select: Neural Architectures for Extractive Document Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04244).\" arXiv preprint arXiv:1611.04244 (2016).\n  - Suzuki, Jun, and Masaaki Nagata. \"[Cutting-off Redundant Repeating Generations for Neural Abstractive Summarization](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE17-2047).\" EACL 2017 (2017): 291.\n  - Jiwei Tan and Xiaojun Wan. [Abstractive Document Summarization with a Graph-Based Attentional Neural Model](). ACL, 2017.\n  - Preksha Nema, Mitesh M. Khapra, Balaraman Ravindran and Anirban Laha. [Diversity driven attention model for query-based abstractive summarization](). ACL,2017\n  - Abigail See, Peter J. Liu and Christopher D. Manning. [Get To The Point: Summarization with Pointer-Generator Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04368). ACL, 2017.\n  - Qingyu Zhou, Nan Yang, Furu Wei and Ming Zhou. [Selective Encoding for Abstractive Sentence Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.07073). ACL, 2017\n  - Maxime Peyrard and Judith Eckle-Kohler. [Supervised Learning of Automatic Pyramid for Optimization-Based Multi-Document Summarization](). ACL, 2017.\n  - Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, Shay B. Cohen. \"[Neural Extractive Summarization with Side Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04530).\" arXiv preprint arXiv:1704.04530 (2017).\n  - Romain Paulus, Caiming Xiong, Richard Socher. \"[A Deep Reinforced Model for Abstractive Summarization](https:\u002F\u002Fmetamind.io\u002Fstatic\u002Fpdf\u002Fdeep-reinforced-model-arxiv-v1.pdf).\" (2017).\n  - Shibhansh Dohare, Harish Karnick. \"[Text Summarization using Abstract Meaning Representation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01678).\" \tarXiv:1706.01678 (2017).\n  - Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev. \"[Graph-based Neural Multi-Document Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.06681).\" \tarXiv:1706.06681 (2017).\n  - Piji Li, Wai Lam, Lidong Bing, and Zihao Wang. [Deep Recurrent Generative Decoder for Abstractive Text Summarization](http:\u002F\u002Flipiji.com\u002F). Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'17). Sep 2017. \n  - Piji Li, Wai Lam, Lidong Bing, Weiwei Guo, and Hang Li. [Cascaded Attention based Unsupervised Information Distillation for Compressive Summarization](http:\u002F\u002Flipiji.com\u002F). Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP'17). Sep 2017.\n  - Piji Li, Lidong Bing, Wai Lam. [Reader-Aware Multi-Document Summarization: An Enhanced Model and The First Dataset](http:\u002F\u002Fwww1.se.cuhk.edu.hk\u002F~textmine\u002Fdataset\u002Fra-mds\u002F). Proceedings of the EMNLP 2017 Workshop on New Frontiers in Summarization (EMNLP-NewSum'17). Sep 2017.\n  - Tan, Jiwei, Xiaojun Wan, and Jianguo Xiao. \"[From Neural Sentence Summarization to Headline Generation: A Coarse-to-Fine Approach](http:\u002F\u002Fstatic.ijcai.org\u002Fproceedings-2017\u002F0574.pdf).\" IJCAI 2017.\n  - Ling, Jeffrey, and Alexander M. Rush. \"[Coarse-to-Fine Attention Models for Document Summarization](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW\u002FW17\u002FW17-4505.pdf).\" EMNLP 2017 (2017): 33.\n  - Ziqiang Cao, Furu Wei, Wenjie Li, Sujian Li. \"[Faithful to the Original: Fact Aware Neural Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.04434).\" arXiv:1711.04434 (2017).\n  - Angela Fan, David Grangier, Michael Auli. \"[Controllable Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.05217).\" arXiv:1711.05217 (2017).\n  - Liu, Linqing, Yao Lu, Min Yang, Qiang Qu, Jia Zhu, and Hongyan Li. \"[Generative Adversarial Network for Abstractive Text Summarization](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09357.pdf).\" arXiv preprint arXiv:1711.09357 (2017).\n  - Narayan, Shashi, Shay B. Cohen, and Mirella Lapata. \"[Ranking Sentences for Extractive Summarization with Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08636).\" arXiv preprint arXiv:1802.08636 (2018).\n  - Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi. \"[Deep Communicating Agents for Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10357).\" NAACL (2018).\n  - Chen, Wenhu, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li, and Ming Zhou. \"[Generative Bridging Network in Neural Sequence Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09152).\" NAACL (2018).\n  - Li, Piji, Lidong Bing, and Wai Lam. \"[Actor-Critic based Training Framework for Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.11070).\" arXiv preprint arXiv:1803.11070 (2018).\n  - Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian. \"[\nA Discourse-Aware Attention Model for Abstractive Summarization of Long Documents](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.05685)\".  NAACL, 2018.\n  - Yuxiang Wu, Baotian Hu. \"[Learning to Extract Coherent Summary via Deep Reinforcement Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.07036).\" AAAI (2018).\n  - Jianmin Zhang, Jiwei Tan, Xiaojun Wan. \"[Towards a Neural Network Approach to Abstractive Multi-Document Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09010).\" arXiv:1804.09010  (2018).\n  - Li Wang, Junlin Yao, Yunzhe Tao, Li Zhong, Wei Liu, Qiang Du. \"[A Reinforced Topic-Aware Convolutional Sequence-to-Sequence Model for Abstractive Text Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.03616).\" IJCAI-ECAI  (2018).\n  - Yen-Chun Chen, Mohit Bansal. \"[Fast Abstractive Summarization with Reinforce-Selected Sentence Rewriting\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.11080).\" arXiv:1805.11080  (2018).\n  - Song, Kaiqiang, Lin Zhao, and Fei Liu. \"[Structure-Infused Copy Mechanisms for Abstractive Summarization](http:\u002F\u002Fwww.cs.ucf.edu\u002F~feiliu\u002Fpapers\u002FCOLING2018_StructSumm.pdf).\" COLING, 2018.\n  - Keneshloo, Yaser, Tian Shi, Chandan K. Reddy, and Naren Ramakrishnan. \"[Deep Reinforcement Learning For Sequence to Sequence Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09461).\" arXiv preprint arXiv:1805.09461 (2018).\n  - Qingyu Zhou, Nan Yang, Furu Wei, Ming Zhou. \"[Sequential Copying Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02301).\" AAAI (2018).\n  - Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, Tiejun Zhao. \"[Neural Document Summarization by Jointly Learning to Score and Select Sentences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02305).\" ACL (2018).\n  - Lin, Junyang, Xu Sun, Shuming Ma, and Qi Su. \"[Global Encoding for Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.03989).\" arXiv preprint arXiv:1805.03989 (2018).\n  - Khatri, Chandra, Gyanit Singh, and Nish Parikh. \"[Abstractive and Extractive Text Summarization using Document Context Vector and Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.08000).\" arXiv preprint arXiv:1807.08000 (2018).\n  - Hsu, Wan-Ting, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang, and Min Sun. \"[A Unified Model for Extractive and Abstractive Summarization using Inconsistency Loss](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.06266).\" arXiv preprint arXiv:1805.06266 (2018).\n  - Sun, Fei, Peng Jiang, Hanxiao Sun, Changhua Pei, Wenwu Ou, and Xiaobo Wang. \"[Multi-Source Pointer Network for Product Title Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.06885).\" arXiv preprint arXiv:1808.06885 (2018).\n  - Wojciech Kryściński, Romain Paulus, Caiming Xiong, Richard Socher. \"[Improving Abstraction in Text Summarization\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.07913).\" arXiv preprint arXiv:1808.07913 (2018).\n  - Zhang, Xingxing, Mirella Lapata, Furu Wei, and Ming Zhou. \"[Neural Latent Extractive Document Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.07187).\" arXiv preprint arXiv:1808.07187 (2018).\n  - Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush. \"[Bottom-Up Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.10792).\" arXiv preprint arXiv:1808.10792 (2018).\n  - Yichen Jiang, Mohit Bansal. \"[Closed-Book Training to Improve Summarization Encoder Memory](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.04585).\" arXiv preprint arXiv:1809.04585 (2018).\n  - Kamal Al-Sabahi, Zhang Zuping, Yang Kang. \"[Bidirectional Attentional Encoder-Decoder Model and Bidirectional Beam Search for Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.06662).\" arXiv preprint arXiv:1809.06662 (2018).\n  - Raphael Schumann. \"[Unsupervised Abstractive Sentence Summarization using Length Controlled Variational Autoencoder](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.05233).\" arXiv preprint arXiv:1809.05233 (2018).\n  - Krishna, Kundan, and Balaji Vasan Srinivasan. \"[Generating Topic-Oriented Summaries Using Neural Attention](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1153).\" NAACL 2018.\n  - Lisa Fan, Dong Yu, Lu Wang. \"[Robust Neural Abstractive Summarization Systems and Evaluation against Adversarial Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06065).\" arXiv preprint arXiv:1810.06065 (2018).\n  - Eric Chu, Peter J. Liu. \"[Unsupervised Neural Multi-document Abstractive Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05739).\" arXiv preprint arXiv:1810.05739 (2018).\n  - Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy. \"[Deep Transfer Reinforcement Learning for Text Summarization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06667).\" arXiv preprint arXiv:1810.06667 (2018).\n  - Mahnaz Koupaee, William Yang Wang. \"[WikiHow: A Large Scale Text Summarization Dataset\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.09305).\" arXiv preprint arXiv:1810.09305 (2018).\n  - Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon. \"[Unified Language Model Pre-training for Natural Language Understanding and Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.03197).\" arXiv preprint arXiv:1905.03197 (2019).\n \n### Opinion Summarization\n  - Wu, Haibing, Yiwei Gu, Shangdi Sun, and Xiaodong Gu. \"[Aspect-based Opinion Summarization with Convolutional Neural Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.09128).\" arXiv preprint arXiv:1511.09128 (2015).\n  - Irsoy, Ozan, and Claire Cardie. \"[Opinion Mining with Deep Recurrent Neural Networks](http:\u002F\u002Fanthology.aclweb.org\u002FD\u002FD14\u002FD14-1080.pdf).\" In EMNLP, pp. 720-728. 2014.\n  - Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, Wai Lam. \"[Neural Rating Regression with Abstractive Tips Generation for Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00154).\". In SIGIR, 2017.\n  \n### Video Summarization\n  - Zhou, Kaiyang, and Yu Qiao. \"[Deep Reinforcement Learning for Unsupervised Video Summarization with Diversity-Representativeness Reward](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.00054).\" arXiv preprint arXiv:1801.00054 (2017). \n  - Mahasseni, Behrooz, Michael Lam, and Sinisa Todorovic. \"[Unsupervised video summarization with adversarial lstm networks](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FMahasseni_Unsupervised_Video_Summarization_CVPR_2017_paper.pdf).\" In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.\n\n### Reading Comprehension\n - Hermann, Karl Moritz, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. \"[Teaching machines to read and comprehend](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5945-teaching-machines-to-read-and-comprehend).\" In Advances in Neural Information Processing Systems, pp. 1693-1701. 2015.\n - Hill, Felix, Antoine Bordes, Sumit Chopra, and Jason Weston. \"[The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.02301).\" arXiv preprint arXiv:1511.02301 (2015).\n - Kadlec, Rudolf, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. \"[Text Understanding with the Attention Sum Reader Network](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01547).\" arXiv preprint arXiv:1603.01547 (2016).\n - Chen, Danqi, Jason Bolton, and Christopher D. Manning. \"[A thorough examination of the cnn\u002Fdaily mail reading comprehension task](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02858).\" arXiv preprint arXiv:1606.02858 (2016).\n - Dhingra, Bhuwan, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. \"[Gated-Attention Readers for Text Comprehension](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01549).\" arXiv preprint arXiv:1606.01549 (2016).\n - Sordoni, Alessandro, Phillip Bachman, and Yoshua Bengio. \"[Iterative Alternating Neural Attention for Machine Reading](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02245).\" arXiv preprint arXiv:1606.02245 (2016).\n - Trischler, Adam, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. \"[Natural Language Comprehension with the EpiReader](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02270).\" arXiv preprint arXiv:1606.02270 (2016).\n - Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, Guoping Hu. \"[Attention-over-Attention Neural Networks for Reading Comprehension](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.04423).\" arXiv preprint arXiv:1607.04423 (2016).\n - Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu. \"[Consensus Attention-based Neural Networks for Chinese Reading Comprehension](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02250).\" arXiv preprint arXiv:1607.02250 (2016).\n - Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey and David Berthelot. \"[WIKIREADING: A Novel Large-scale Language Understanding Task over Wikipedia](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP\u002FP16\u002FP16-1145.pdf).\" ACL (2016). pp. 1535-1545.\n  - Minghao Hu, Yuxing Peng, Xipeng Qiu. \"[Mnemonic Reader for Machine Comprehension](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02798).\" arXiv:1705.02798 (2017).\n  - Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang and Ming Zhou. \"[R-NET: Machine Reading Comprehension with Self-matching Networks](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpublication\u002Fmcr\u002F).\" ACL (2017).\n  \n\n### Sentence Modelling\n  - Kalchbrenner, Nal, Edward Grefenstette, and Phil Blunsom. \"[A convolutional neural network for modelling sentences](http:\u002F\u002Farxiv.org\u002Fabs\u002F1404.2188).\" arXiv preprint arXiv:1404.2188 (2014).\n  - Kim, Yoon. \"[Convolutional neural networks for sentence classification](http:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882).\" arXiv preprint arXiv:1408.5882 (2014).\n  - Le, Quoc V., and Tomas Mikolov. \"[Distributed representations of sentences and documents](http:\u002F\u002Farxiv.org\u002Fabs\u002F1405.4053).\" arXiv preprint arXiv:1405.4053 (2014).\n  - Yang, Zichao, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. \"[Hierarchical Attention Networks for Document Classification](http:\u002F\u002Fwww.cs.cmu.edu\u002F~diyiy\u002Fdocs\u002Fnaacl16.pdf).\" In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2016.\n\n### Reasoning\n  - Peng, Baolin, Zhengdong Lu, Hang Li, and Kam-Fai Wong. \"[Towards Neural Network-based Reasoning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1508.05508).\" arXiv preprint arXiv:1508.05508 (2015).\n  \n### Knowledge Engine\n - Bordes, Antoine, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. \"[Translating embeddings for modeling multi-relational data](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5071-translating-embeddings-for-modeling-multi-relational-data).\" In Advances in Neural Information Processing Systems, pp. 2787-2795. 2013. TransE\n - Lin, Yankai, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. \"[Neural Relation Extraction with Selective Attention over Instances](http:\u002F\u002Fnlp.csai.tsinghua.edu.cn\u002F~lzy\u002Fpublications\u002Facl2016_nre.pdf).\" ACL (2016)\n - TransXXX\n\n### Memory Networks\n - Graves, Alex, Greg Wayne, and Ivo Danihelka. \"[Neural turing machines](http:\u002F\u002Farxiv.org\u002Fabs\u002F1410.5401).\" arXiv preprint arXiv:1410.5401 (2014).\n - Weston, Jason, Sumit Chopra, and Antoine Bordes. \"[Memory networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1410.3916).\" ICLR (2014).\n - Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. \"[End-to-end memory networks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5846-end-to-end-memory-networks).\" In Advances in neural information processing systems, pp. 2440-2448. 2015.\n - Weston, Jason, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. \"[Towards ai-complete question answering: A set of prerequisite toy tasks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.05698).\" arXiv preprint arXiv:1502.05698 (2015).\n - Bordes, Antoine, Nicolas Usunier, Sumit Chopra, and Jason Weston. \"[Large-scale simple question answering with memory networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02075).\" arXiv preprint arXiv:1506.02075 (2015).\n - Kumar, Ankit, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. \"[Ask me anything: Dynamic memory networks for natural language processing](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.07285).\" arXiv preprint arXiv:1506.07285 (2015).\n - Dodge, Jesse, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. \"[Evaluating prerequisite qualities for learning end-to-end dialog systems](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06931).\" arXiv preprint arXiv:1511.06931 (2015).\n - Hill, Felix, Antoine Bordes, Sumit Chopra, and Jason Weston. \"[The Goldilocks Principle: Reading Children's Books with Explicit Memory Representations](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.02301).\" arXiv preprint arXiv:1511.02301 (2015).\n - Weston, Jason. \"[Dialog-based Language Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.06045).\" arXiv preprint arXiv:1604.06045 (2016).\n - Bordes, Antoine, and Jason Weston. \"[Learning End-to-End Goal-Oriented Dialog](http:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07683).\" arXiv preprint arXiv:1605.07683 (2016).\n - Chandar, Sarath, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. \"[Hierarchical Memory Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07427).\" arXiv preprint arXiv:1605.07427 (2016).\n - Jason Weston.\"[Memory Networks for Language Understanding](http:\u002F\u002Fwww.thespermwhale.com\u002Fjaseweston\u002Ficml2016\u002F).\" ICML Tutorial 2016\n - Tang, Yaohua, Fandong Meng, Zhengdong Lu, Hang Li, and Philip LH Yu. \"[Neural Machine Translation with External Phrase Memory](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01792).\" arXiv preprint arXiv:1606.01792 (2016).\n - Wang, Mingxuan, Zhengdong Lu, Hang Li, and Qun Liu. \"[Memory-enhanced Decoder for Neural Machine Translation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02003).\" arXiv preprint arXiv:1606.02003 (2016).\n - Xiong, Caiming, Stephen Merity, and Richard Socher. \"[Dynamic memory networks for visual and textual question answering](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01417).\" arXiv preprint arXiv:1603.01417 (2016).\n\n### Neural Structures\n - Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. \"[Highway networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1505.00387).\" arXiv preprint arXiv:1505.00387 (2015).\n - Srivastava, Rupesh K., Klaus Greff, and Jürgen Schmidhuber. \"[Training very deep networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1507.06228).\" In Advances in Neural Information Processing Systems, pp. 2368-2376. 2015.\n - Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. \"[Pointer networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03134).\" In Advances in Neural Information Processing Systems, pp. 2692-2700. 2015.\n - Rasmus, Antti, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. \"[Semi-supervised learning with ladder networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1507.02672).\" In Advances in Neural Information Processing Systems, pp. 3546-3554. 2015.\n - Bengio, Samy, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. \"[Scheduled sampling for sequence prediction with recurrent neural networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03099).\" In Advances in Neural Information Processing Systems, pp. 1171-1179. 2015.\n - He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. \"[Deep Residual Learning for Image Recognition](http:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385).\" arXiv preprint arXiv:1512.03385 (2015).\n - He, Kaiming. \"[Tutorial: Deep\tResidual\tNetworks: Deep\tLearning\tGets\tWay Deeper](http:\u002F\u002Ficml.cc\u002F2016\u002Ftutorials\u002Ficml2016_tutorial_deep_residual_networks_kaiminghe.pdf).\" ICML\t2016\ttutorial.\n - Courbariaux, Matthieu, and Yoshua Bengio. \"[Binarynet: Training deep neural networks with weights and activations constrained to+ 1 or-1](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.02830).\" arXiv preprint arXiv:1602.02830 (2016). \n - Jiatao Gu, Zhengdong Lu, Hang Li, Victor O.K. Li. \"[Incorporating Copying Mechanism in Sequence-to-Sequence Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06393).\" ACL (2016)\n - Gulcehre, Caglar, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. \"[Pointing the Unknown Words](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08148).\" arXiv preprint arXiv:1603.08148 (2016).\n - Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. \"[Learning to compose neural networks for question answering](http:\u002F\u002Farxiv.org\u002Fabs\u002F1601.01705).\" NAACL 2016.\n - Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, Jürgen Schmidhuber. \"[Recurrent Highway Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.03474).\" arXiv preprint  arXiv:1607.03474 (2016).\n - Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, William W. Cohen. \"[Review Networks for Caption Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07912).\" arXiv preprint  arXiv:1605.07912 (2016).\n - Xiang Li, Tao Qin, Jian Yang, Tie-Yan Liu. \"[LightRNN: Memory and Computation-Efficient Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09893).\" arXiv preprint  arXiv:1610.09893 (2016).\n - Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li. \"[Neural Machine Translation with Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01874).\" arXiv preprint  arXiv:1611.01874 (2016).\n - Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, Wei-Ying Ma. \"[Dual Learning for Machine Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.00179).\" arXiv preprint  arXiv:1611.00179 (2016).\n  - Bahdanau, Dzmitry, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. \"[An actor-critic algorithm for sequence prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.07086).\" arXiv preprint arXiv:1607.07086 (2016).\n - Kannan, Anjuli, and Oriol Vinyals. \"[Adversarial evaluation of dialogue models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.08198).\" arXiv preprint arXiv:1701.08198 (2017).\n - Kawthekar, Prasad, Raunaq Rewari, and Suvrat Bhooshan. \"[Evaluating Generative Models for Text Generation](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002Freports\u002F2737434.pdf).\"\n - Li, Jiwei, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. \"[Adversarial Learning for Neural Dialogue Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.06547).\" arXiv preprint arXiv:1701.06547 (2017).\n - Yang, Zhen, Wei Chen, Feng Wang, and Bo Xu. \"[Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.04887).\" arXiv preprint arXiv:1703.04887 (2017).\n - Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, Tie-Yan Liu. \"[Adversarial Neural Machine Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.06933).\" IJCAI (2017).\n - Liu, Pengfei, Xipeng Qiu, and Xuanjing Huang. \"[Adversarial Multi-task Learning for Text Classification](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.05742).\" arXiv preprint arXiv:1704.05742 (2017).\n - Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin. \"[Convolutional Sequence to Sequence Learning (https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.03122).\"  arXiv:1705.03122 (2017).\n - Lamb, Alex M., Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C. Courville, and Yoshua Bengio. \"[Professor forcing: A new algorithm for training recurrent networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09038).\" In Advances In Neural Information Processing Systems, pp. 4601-4609. 2016.\n - Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. \"[Stochastic backpropagation and approximate inference in deep generative models](http:\u002F\u002Farxiv.org\u002Fabs\u002F1401.4082).\" arXiv preprint arXiv:1401.4082 (2014).\n - Kingma, Diederik P., and Max Welling. \"[Auto-encoding variational bayes](http:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114).\" arXiv preprint arXiv:1312.6114 (2013).\n - Fabius, Otto, and Joost R. van Amersfoort. \"[Variational recurrent auto-encoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6581).\" arXiv preprint arXiv:1412.6581 (2014).\n - Bayer, Justin, and Christian Osendorfer. \"[Learning stochastic recurrent networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1411.7610).\" arXiv preprint arXiv:1411.7610 (2014).\n - Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. \"[Generating sentences from a continuous space](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06349).\" arXiv preprint arXiv:1511.06349 (2015).\n - Gregor, Karol, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. \"[DRAW: A recurrent neural network for image generation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.04623).\" arXiv preprint arXiv:1502.04623 (2015).\n - Makhzani, Alireza, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. \"[Adversarial autoencoders](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644).\" arXiv preprint arXiv:1511.05644 (2015).\n - Johnson, Matthew J., David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. \"[Composing graphical models with neural networks for structured representations and fast inference](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06277).\" arXiv preprint arXiv:1603.06277 (2016).\n - Doersch, Carl. \"[Tutorial on Variational Autoencoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.05908).\" arXiv preprint arXiv:1606.05908 (2016).\n - Chung, Junyoung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. \"[A recurrent latent variable model for sequential data](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02216).\" In Advances in neural information processing systems, pp. 2980-2988. 2015.\n - Eslami, S. M., Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. \"[Attend, Infer, Repeat: Fast Scene Understanding with Generative Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08575).\" arXiv preprint arXiv:1603.08575 (2016).\n - Shengjia Zhao, Jiaming Song, Stefano Ermon. \"[InfoVAE: Information Maximizing Variational Autoencoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02262).\" arXiv:1706.02262 (2017).\n - Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. \"[Generative adversarial nets](http:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661).\" In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014\n - Radford, Alec, Luke Metz, and Soumith Chintala. \"[Unsupervised representation learning with deep convolutional generative adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434).\" arXiv preprint arXiv:1511.06434 (2015).\n - Denton, Emily L., Soumith Chintala, and Rob Fergus. \"[Deep Generative Image Models using a￼ Laplacian Pyramid of Adversarial Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.05751).\" In Advances in neural information processing systems, pp. 1486-1494. 2015.\n - Dosovitskiy, Alexey, Jost Tobias Springenberg, and Thomas Brox. \"[Learning to generate chairs with convolutional neural networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1411.5928).\" In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1538-1546. 2015.\n - Mathieu, Michael, Camille Couprie, and Yann LeCun. \"[Deep multi-scale video prediction beyond mean square error](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05440).\" arXiv preprint arXiv:1511.05440 (2015).\n - Salimans, Tim, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. \"[Improved Techniques for Training GANs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03498).\" arXiv preprint arXiv:1606.03498 (2016).\n - Chen, Xi, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. \"[InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657).\" arXiv preprint arXiv:1606.03657 (2016).\n - Im, Daniel Jiwoong, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. \"[Generating images with recurrent adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.05110).\" arXiv preprint arXiv:1602.05110 (2016).\n - Yu, Lantao, Weinan Zhang, Jun Wang, and Yong Yu. \"[SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient](http:\u002F\u002Farxiv.org\u002Fabs\u002F1609.05473).\" arXiv preprint arXiv:1609.05473 (2016).\n - Augustus Odena, Christopher Olah, Jonathon Shlens. \"[Conditional Image Synthesis With Auxiliary Classifier GANs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09585).\" arXiv preprint arXiv:1610.09585 (2016).\n - Ian Goodfellow. \"[NIPS Tutorial: GANs](http:\u002F\u002Fwww.iangoodfellow.com\u002Fslides\u002F2016-12-04-NIPS.pdf)\", NIPS, 2016\n - Che, Tong, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. \"[Maximum-Likelihood Augmented Discrete Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.07983).\" arXiv preprint arXiv:1702.07983 (2017).\n - Junbo (Jake) Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, Yann LeCun. \"[Adversarially Regularized Autoencoders for Generating Discrete Structures](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04223).\" arXiv preprint arXiv:1706.04223 (2017).\n - \tMike Lewis  \tDenis Yarats  \tYann N. Dauphin  \tDevi Parikh  \tDhruv Batra . \"[ Deal or No Deal? End-to-End Learning for Negotiation Dialogues](http:\u002F\u002Fs3.amazonaws.com\u002Fend-to-end-negotiator\u002Fend-to-end-negotiator.pdf).\" (2017).\n - Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, Shakir Mohamed. \"[Variational Approaches for Auto-Encoding Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04987).\" arXiv preprint arXiv:1706.04987 (2017).\n - Goyal, Prasoon, Zhiting Hu, Xiaodan Liang, Chenyu Wang, and Eric Xing. \"[Nonparametric Variational Auto-encoders for Hierarchical Representation Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.07027.pdf).\" arXiv preprint arXiv:1703.07027 (2017).\n - Sabour, Sara, Nicholas Frosst, and Geoffrey Hinton. \"[Dynamic Routing between Capsules](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.09829).\" (2017).\n - Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. \"[Attention is all you need](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7181-attention-is-all-you-need).\" NIPS. 2017.\n \n#### Architecture Search\n- Frankle, Jonathan, and Michael Carbin. \"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\" arXiv preprint arXiv:1803.03635 (2018).\n- Xie, Saining, Alexander Kirillov, Ross Girshick, and Kaiming He. \"Exploring Randomly Wired Neural Networks for Image Recognition.\" arXiv preprint arXiv:1904.01569 (2019).\n- So, David R., Chen Liang, and Quoc V. Le. \"The Evolved Transformer.\" arXiv preprint arXiv:1901.11117 (2019).\n- Chenguang Wang, Mu Li, Alexander J. Smola. \"Language Models with Transformers.\" arXiv preprint arXiv:1904.09408 (2019).\n \n### Recommendation System\n- Salakhutdinov, Ruslan, Andriy Mnih, and Geoffrey Hinton. \"[Restricted Boltzmann machines for collaborative filtering](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=1273596).\" In Proceedings of the 24th international conference on Machine learning, pp. 791-798. ACM, 2007.\n- Wang, Hao, Xingjian Shi, and Dit-Yan Yeung. \"[Relational Stacked Denoising Autoencoder for Tag Recommendation](http:\u002F\u002Fwww.wanghao.in\u002Fpaper\u002FAAAI15_RSDAE.pdf).\" In AAAI, pp. 3052-3058. 2015.\n- Wang, Hao, Naiyan Wang, and Dit-Yan Yeung. \"[Collaborative deep learning for recommender systems](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2783273).\" In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235-1244. ACM, 2015.\n- Covington, Paul, Jay Adams, and Emre Sargin. \"[Deep neural networks for youtube recommendations](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2959190).\" In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191-198. ACM, 2016.\n- Devooght, Robin, and Hugues Bersini. \"[Collaborative Filtering with Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.07400).\" arXiv preprint arXiv:1608.07400 (2016).\n- Wang, Hao, S. H. I. Xingjian, and Dit-Yan Yeung. \"[Collaborative recurrent autoencoder: Recommend while learning to fill in the blanks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6163-collaborative-recurrent-autoencoder-recommend-while-learning-to-fill-in-the-blanks).\" In Advances in Neural Information Processing Systems, pp. 415-423. 2016.\n- Tang, Jian, Yifan Yang, Sam Carton, Ming Zhang, and Qiaozhu Mei. \"[Context-aware Natural Language Generation with Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.09900).\" arXiv preprint arXiv:1611.09900 (2016).\n- Zhang, Fuzheng, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. \"[Collaborative Knowledge Base Embedding for Recommender Systems](http:\u002F\u002Fwww.kdd.org\u002Fkdd2016\u002Fsubtopic\u002Fview\u002Fcollaborative-knowledge-base-embedding-for-recommender-systems).\" KDD, 2016. \n- Dong, Li, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke XuΤ. \"[Learning to Generate Product Reviews from Attributes](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE\u002FE17\u002FE17-1059.pdf).\" EACL, 2017.\n- He, Xiangnan. \"[Neural Collaborative Filtering](http:\u002F\u002Fwww.comp.nus.edu.sg\u002F~xiangnan\u002Fpapers\u002Fncf.pdf).\" WWW, 2017\n- Wu, Chao-Yuan, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. \"[Recurrent Recommender Networks](http:\u002F\u002Falexbeutel.com\u002Fpapers\u002Frrn_wsdm2017.pdf).\" Training 10, no. 2: 10-1.2017\n- Radford, Alec, Rafal Jozefowicz, and Ilya Sutskever. \"[Learning to generate reviews and discovering sentiment](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.01444.pdf).\" arXiv preprint arXiv:1704.01444 (2017).\n- Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, Wai Lam. \"[Neural Rating Regression with Abstractive Tips Generation for Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00154).\". In SIGIR, pp xx-xx. 2017.\n\n### Network Representation Learning\n - [Must-read papers on network representation learning (NRL)\u002Fnetwork embedding (NE)](https:\u002F\u002Fgithub.com\u002Fthunlp\u002FNRLPapers) \n\n### Music Generation\n - [Using machine learning to generate music](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fusing-machine-learning-to-generate-music)\n \n### Computational Biology\n - [Awesome DeepBio](https:\u002F\u002Fgithub.com\u002Fgokceneraslan\u002Fawesome-deepbio) by Gökçen Eraslan\n\n### GO\n - Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser et al. \"[Mastering the game of Go with deep neural networks and tree search](http:\u002F\u002Fwww.nature.com\u002Fnature\u002Fjournal\u002Fv529\u002Fn7587\u002Ffull\u002Fnature16961.html).\" Nature 529, no. 7587 (2016): 484-489.\n - Tian, Yuandong, and Yan Zhu. \"[Better Computer Go Player with Neural Network and Long-term Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06410).\" arXiv preprint arXiv:1511.06410 (2015).\n \n### Stock Prediction\n  - Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan. \"Deep Learning for Event-Driven Stock Prediction\". IJCAI 2015.\n  - Si, Jianfeng, Arjun Mukherjee, Bing Liu, Sinno Jialin Pan, Qing Li, and Huayi Li. \"[Exploiting Social Relations and Sentiment for Stock Prediction](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD14-1120).\" EMNLP 2014.\n  - Ding, Xiao, Yue Zhang, Ting Liu, and Junwen Duan. \"[Using Structured Events to Predict Stock Price Movement: An Empirical Investigation](http:\u002F\u002Fanthology.aclweb.org\u002FD\u002FD14\u002FD14-1148.pdf).\" EMNLP 2014.\n  - Bollen, Johan, Huina Mao, and Xiaojun Zeng. \"[Twitter mood predicts the stock market](http:\u002F\u002Farxiv.org\u002Fabs\u002F1010.3003).\" Journal of Computational Science 2, no. 1 (2011): 1-8.\n  - Hengjian Jia. \"[Investigation Into The Effectiveness Of Long Short Term Memory Networks For Stock Price Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07893).\" arXiv:1603.07893. (2016)\n","### 创业公司\n  - [机器学习、深度学习、计算机视觉、大数据创业公司 - Startups in AI](https:\u002F\u002Fgithub.com\u002Flipiji\u002FAIStartups)\n\n## 深度强化学习\n - David Silver. \"[Tutorial: Deep Reinforcement Learning](http:\u002F\u002Ficml.cc\u002F2016\u002Ftutorials\u002Fdeep_rl_tutorial.pdf).\" ICML 2016.\n - David Silver 的课程. \"[Reinforcement Learning](http:\u002F\u002Fwww0.cs.ucl.ac.uk\u002Fstaff\u002FD.Silver\u002Fweb\u002FTeaching.html)\". 2015.\n - Bahdanau, Dzmitry, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. \"[An Actor-Critic Algorithm for Sequence Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.07086).\" arXiv preprint arXiv:1607.07086 (2016).\n - Li, Jiwei, Will Monroe, Alan Ritter, and Dan Jurafsky. \"[Deep Reinforcement Learning for Dialogue Generation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01541).\" arXiv preprint arXiv:1606.01541 (2016).\n - Pathak, Deepak, Pulkit Agrawal, Alexei A. Efros, and Trevor Darrell. \"[Curiosity-driven Exploration by Self-supervised Prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.05363).\" arXiv preprint arXiv:1705.05363 (2017).\n - Keneshloo, Yaser, Tian Shi, Chandan K. Reddy, and Naren Ramakrishnan. \"[Deep Reinforcement Learning For Sequence to Sequence Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09461).\" arXiv preprint arXiv:1805.09461 (2018).\n\n## 对话系统\n- Jiang, Shaojie, and Maarten de Rijke. \"[Why are Sequence-to-Sequence Models So Dull?](https:\u002F\u002Fstaff.fnwi.uva.nl\u002Fm.derijke\u002Fwp-content\u002Fpapercite-data\u002Fpdf\u002Fjiang-why-2018.pdf).\" report, 2018.\n- Eric Chu, Prashanth Vijayaraghavan, Deb Roy. \"[Learning Personas from Dialogue with Attentive Memory Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.08717).\" EMNLP (2018).\n- Ruizhe Li, Chenghua Lin, Matthew Collinson, Xiao Li, Guanyi Chen. \"[A Dual-Attention Hierarchical Recurrent Neural Network for Dialogue Act Classification](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.09154).\"  arXiv:1810.09154 (2018).\n\n#### 任务导向型对话\n- Wen, Tsung-Hsien, David Vandyke, Nikola Mrksic, Milica Gasic, Lina M. Rojas-Barahona, Pei-Hao Su, Stefan Ultes, and Steve Young. \"[A network-based end-to-end trainable task-oriented dialogue system](https:\u002F\u002Farxiv.org\u002Fabs\u002F1604.04562).\" arXiv preprint arXiv:1604.04562 (2016).\n- Li, Xiujun, Yun-Nung Chen, Lihong Li, Jianfeng Gao, and Asli Celikyilmaz. \"[End-to-end task-completion neural dialogue systems](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.01008).\" arXiv preprint arXiv:1703.01008 (2017).\n- Li, Xiujun, Zachary C. Lipton, Bhuwan Dhingra, Lihong Li, Jianfeng Gao, and Yun-Nung Chen. \"[A user simulator for task-completion dialogues](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.05688).\" arXiv preprint arXiv:1612.05688 (2016).\n- Yan, Zhao, Nan Duan, Peng Chen, Ming Zhou, Jianshe Zhou, and Zhoujun Li. \"[Building Task-Oriented Dialogue Systems for Online Shopping](http:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002FviewPaper\u002F14261).\" In AAAI, pp. 4618-4626. 2017.\n- Peng, Baolin, Xiujun Li, Jianfeng Gao, Jingjing Liu, and Kam-Fai Wong. \"[Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP18-1203).\" ACL, vol. 1, pp. 2182-2192. 2018.\n- Janarthanan Rajendran, Jatin Ganhotra, Satinder Singh, Lazaros Polymenakos. \"[Learning End-to-End Goal-Oriented Dialog with Multiple Answers](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.09996).\" arXiv preprint arXiv:1808.09996 (2018).\n\n## 文本生成\n- Rennie, Steven J., Etienne Marcheret, Youssef Mroueh, Jarret Ross, and Vaibhava Goel. \"[Self-critical sequence training for image captioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.00563).\" arXiv preprint arXiv:1612.00563 (2016).\n- Lin, Kevin, Dianqi Li, Xiaodong He, Zhengyou Zhang, and Ming-Ting Sun. \"[Adversarial Ranking for Language Generation](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1705.11001.pdf).\" arXiv preprint arXiv:1705.11001 (2017).\n- Zhang, Li, Flood Sung, Feng Liu, Tao Xiang, Shaogang Gong, Yongxin Yang, and Timothy M. Hospedales. \"[Actor-Critic Sequence Training for Image Captioning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09601).\" arXiv preprint arXiv:1706.09601 (2017).\n- Wiseman, Sam, Stuart M. Shieber, and Alexander M. Rush. \"[Challenges in Data-to-Document Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.08052).\" arXiv preprint arXiv:1707.08052 (2017).\n- Lebret, Rémi, David Grangier, and Michael Auli. \"[Neural text generation from structured data with application to the biography domain](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07771).\" arXiv preprint arXiv:1603.07771 (2016).\n- Chisholm, Andrew, Will Radford, and Ben Hachey. \"[Learning to generate one-sentence biographies from Wikidata](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.06235).\" arXiv preprint arXiv:1702.06235 (2017).\n- Sha, Lei, Lili Mou, Tianyu Liu, Pascal Poupart, Sujian Li, Baobao Chang, and Zhifang Sui. \"[Order-Planning Neural Text Generation From Structured Data](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.00155).\" arXiv preprint arXiv:1709.00155 (2017).\n- Jiaxian Guo, Sidi Lu, Han Cai, Weinan Zhang, Yong Yu, Jun Wang. \"[Long Text Generation via Adversarial Training with Leaked Information](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.08624).\" arXiv preprint  arXiv:1709.08624 (2017).\n- Guu, Kelvin, Tatsunori B. Hashimoto, Yonatan Oren, and Percy Liang. \"[Generating Sentences by Editing Prototypes](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.08878).\" arXiv preprint arXiv:1709.08878 (2017).\n- Tianyu Liu, Kexiang Wang, Lei Sha, Baobao Chang, Zhifang Sui. \"[Table-to-text Generation by Structure-aware Seq2seq Learnings](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.09724).\" arXiv preprint arXiv:1711.09724 (2017).\n- Kahou, Samira Ebrahimi, Adam Atkinson, Vincent Michalski, Akos Kadar, Adam Trischler, and Yoshua Bengio. \"[FigureQA: An Annotated Figure Dataset for Visual Reasoning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.07300).\" arXiv preprint arXiv:1710.07300 (2017).\n- Murakami, Soichiro, Akihiko Watanabe, Akira Miyazawa, Keiichi Goshima, Toshihiko Yanase, Hiroya Takamura, and Yusuke Miyao. \"[Learning to Generate Market Comments from Stock Prices](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP17-1126).\" In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), vol. 1, pp. 1374-1384. 2017.\n- Mueller, Jonas, David Gifford, and Tommi Jaakkola. \"[Sequence to better sequence: continuous revision of combinatorial structures](http:\u002F\u002Fproceedings.mlr.press\u002Fv70\u002Fmueller17a.html).\" In International Conference on Machine Learning, pp. 2536-2544. 2017.\n- Peter J. Liu, Mohammad Saleh, Etienne Pot, Ben Goodrich, Ryan Sepassi, Lukasz Kaiser, Noam Shazeer. \"[Generating Wikipedia by Summarizing Long Sequences](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.10198).\" ICLR 2018.\n- Clark, Elizabeth, Anne Spencer Ross, Chenhao Tan, Yangfeng Ji, and Noah A. Smith. \"[Creative Writing with a Machine in the Loop: Case Studies on Slogans and Stories](https:\u002F\u002Fhomes.cs.washington.edu\u002F~ansross\u002Fpapers\u002Fiui2018-creativewriting.pdf).\" (2018).\n- Gehrmann, Sebastian, S. E. A. S. Harvard, Falcon Z. Dai, Henry Elder, and Alexander M. Rush. \"[End-to-End Content and Plan Selection for Natural Language Generation](https:\u002F\u002Fscholar.harvard.edu\u002Ffiles\u002Fgehrmann\u002Ffiles\u002Fe2e-harvardnlp.pdf).\"\n- Juncen Li, Robin Jia, He He, Percy Liang. \"[Delete, Retrieve, Generate: A Simple Approach to Sentiment and Style Transfer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.06437).\" arXiv:1804.06437 2018.\n- Yi Liao, Lidong Bing, Piji Li, Shuming Shi, Wai Lam, Tong Zhang. \"[Incorporating Pseudo-Parallel Data for Quantifiable Sequence Editing](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.07007).\" arXiv:1804.07007 2018.\n- Xin Wang, Wenhu Chen, Yuan-Fang Wang, William Yang Wang. \"[No Metrics Are Perfect: Adversarial Reward Learning for Visual Storytelling](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09160).\" arXiv:1804.09160 2018.\n- Sam Wiseman, Stuart M. Shieber, Alexander M. Rush. \"[Learning Neural Templates for Text Generation\n](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.10122).\" arXiv:1808.10122 2018.\n\n## 文本摘要\n  - Ryang, Seonggi 和 Takeshi Abekawa。\"[使用强化学习的自动文本摘要框架](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2390980)。\" 载于《2012年自然语言处理经验方法会议与计算自然语言学习联合会议论文集》，第256-265页。计算语言学协会，2012年。[非基于神经网络的方法]\n  - King, Ben, Rahul Jha, Tyler Johnson, Vaishnavi Sundararajan 和 Clayton Scott。\"[使用深度神经网络的自动文本摘要实验](http:\u002F\u002Fciteseerx.ist.psu.edu\u002Fviewdoc\u002Fdownload?doi=10.1.1.459.8775&rep=rep1&type=pdf)。\" 机器学习（2011年）。\n  - Liu, Yan, Sheng-hua Zhong 和 Wenjie Li。\"[通过无监督深度学习的面向查询的多文档摘要](http:\u002F\u002Fwww.aaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI12\u002Fpaper\u002Fview\u002F5058\u002F5322)。\" AAAI。2012年。\n  - Rioux, Cody, Sadid A. Hasan 和 Yllias Chali。\"[Fear the REAPER: 一个使用强化学习的自动多文档摘要系统](http:\u002F\u002Femnlp2014.org\u002Fpapers\u002Fpdf\u002FEMNLP2014075.pdf)。\" 载于EMNLP，第681-690页。2014年。[非基于神经网络的方法]\n  - PadmaPriya, G. 和 K. Duraiswamy。\"[一种使用深度学习算法的文本摘要方法](http:\u002F\u002Fthescipub.com\u002FPDF\u002Fjcssp.2014.1.9.pdf)。\" 计算机科学杂志 10, 第1期（2013年）：1-9。\n  - Denil, Misha, Alban Demiraj 和 Nando de Freitas。\"[从标记文档中提取显著句子](http:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6815)。\" arXiv预印本 arXiv:1412.6815 (2014年)。\n  - Kågebäck, Mikael 等人。\"[使用连续向量空间模型的抽取式摘要](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW14-1504)。\" 第二届连续向量空间模型及其组合性研讨会（CVSC）@ EACL论文集。2014年。\n  - Denil, Misha, Alban Demiraj, Nal Kalchbrenner, Phil Blunsom 和 Nando de Freitas。\"[使用单一卷积神经网络建模、可视化和摘要文档](http:\u002F\u002Farxiv.org\u002Fabs\u002F1406.3830)。\" arXiv预印本 arXiv:1406.3830 (2014年)。\n  - Cao, Ziqiang, Furu Wei, Li Dong, Sujian Li 和 Ming Zhou。\"[使用递归神经网络的排序及其在多文档摘要中的应用](http:\u002F\u002Fgana.nlsde.buaa.edu.cn\u002F~lidong\u002Faaai15-rec_sentence_ranking.pdf)。\" (AAAI'2015)。\n  - Fei Liu, Jeffrey Flanigan, Sam Thomson, Norman Sadeh 和 Noah A. Smith。\"[使用语义表示进行抽象摘要](http:\u002F\u002Fwww.cs.cmu.edu\u002F~nasmith\u002Fpapers\u002Fliu+flanigan+thomson+sadeh+smith.naacl15.pdf)。\" NAACL 2015。\n  - Wenpeng Yin, Yulong Pei。\"[优化文档摘要的句子建模和选择](http:\u002F\u002Fwww.ijcai.org\u002FProceedings\u002F15\u002FPapers\u002F188.pdf)。\" IJCAI 2015。\n  - He, Zhanying, Chun Chen, Jiajun Bu, Can Wang, Lijun Zhang, Deng Cai 和 Xiaofei He。\"[基于数据重建的文档摘要](http:\u002F\u002Fcs.nju.edu.cn\u002Fzlj\u002Fpdf\u002FAAAI-2012-He.pdf)。\" 载于AAAI。2012年。\n  - Liu, He, Hongliang Yu 和 Zhi-Hong Deng。\"[基于两级稀疏表示模型的多文档摘要](http:\u002F\u002Fwww.cis.pku.edu.cn\u002Ffaculty\u002Fsystem\u002Fdengzhihong\u002Fpapers\u002FAAAI%202015_Multi-Document%20Summarization%20Based%20on%20Two-Level%20Sparse%20Representation%20Model.pdf)。\" 载于第二十九届AAAI人工智能会议。2015年。\n  - Jin-ge Yao, Xiaojun Wan, Jianguo Xiao。\"[通过稀疏优化的压缩文档摘要](http:\u002F\u002Fijcai.org\u002FProceedings\u002F15\u002FPapers\u002F198.pdf)。\" IJCAI 2015。\n  - Piji Li, Lidong Bing, Wai Lam, Hang Li 和 Yi Liao。\"[通过稀疏编码的读者感知多文档摘要](http:\u002F\u002Farxiv.org\u002Fabs\u002F1504.07324)。\" IJCAI 2015。\n  - Lopyrev, Konstantin。\"[使用循环神经网络生成新闻标题](http:\u002F\u002Farxiv.org\u002Fabs\u002F1512.01712)。\" arXiv预印本 arXiv:1512.01712 (2015年)。[将第一段作为文档。]\n  - Alexander M. Rush, Sumit Chopra, Jason Weston。\"[用于抽象句子摘要的神经注意力模型](http:\u002F\u002Farxiv.org\u002Fabs\u002F1509.00685)。\" EMNLP 2015。[句子压缩]\n  - Hu, Baotian, Qingcai Chen 和 Fangze Zhu。\"[LCSTS: 一个大规模中文短文本摘要数据集](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.05865)。\" arXiv预印本 arXiv:1506.05865 (2015年)。\n  - Gulcehre, Caglar, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou 和 Yoshua Bengio。\"[指向未知词](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08148)。\" arXiv预印本 arXiv:1603.08148 (2016年)。\n  - Nallapati, Ramesh, Bing Xiang 和 Bowen Zhou。\"[使用序列到序列RNN及更高阶方法的抽象文本摘要](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.06023)。\" arXiv预印本 arXiv:1602.06023 (2016年)。[句子压缩]\n  - Sumit Chopra, Alexander M. Rush 和 Michael Auli。\"[使用注意力循环神经网络的抽象句子摘要](http:\u002F\u002Fharvardnlp.github.io\u002Fpapers\u002Fnaacl16_summary.pdf)\" NAACL 2016。\n  - Jiatao Gu, Zhengdong Lu, Hang Li, Victor O.K. Li。\"[在序列到序列学习中融入复制机制](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06393)。\" ACL。(2016年)\n  - Jianpeng Cheng, Mirella Lapata。\"[通过提取句子和词语的神经摘要](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07252)\"。 ACL。(2016年)\n  - Zhang, Jianmin, Jin-ge Yao 和 Xiaojun Wan。\"[从实时文本评论构建体育新闻](http:\u002F\u002Fwww.icst.pku.edu.cn\u002Flcwm\u002Fwanxj\u002Ffiles\u002Facl16_sports.pdf)。\" 载于ACL论文集。2016年。\n  - Ziqiang Cao, Wenjie Li, Sujian Li, Furu Wei。\"[AttSum: 使用神经注意力联合学习聚焦和摘要](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.00125)\"。 arXiv:1604.00125 (2016年)\n  - Ayana, Shiqi Shen, Zhiyuan Liu, Maosong Sun。\"[具有句子级优化的神经标题生成](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.01904)\"。 arXiv:1604.01904 (2016年)\n  - Kikuchi, Yuta, Graham Neubig, Ryohei Sasano, Hiroya Takamura 和 Manabu Okumura。\"[在神经编码器-解码器中控制输出长度](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.09552)。\" arXiv预印本 arXiv:1609.09552 (2016年)。\n  - Qian Chen, Xiaodan Zhu, Zhenhua Ling, Si Wei 和 Hui Jiang。\"[用于文档摘要的基于分心的神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.08462)。\" IJCAI 2016。\n  - Wang, Lu 和 Wang Ling。\"[基于神经网络的意见和论点抽象生成](http:\u002F\u002Fwww.ccs.neu.edu\u002Fhome\u002Fluwang\u002Fpapers\u002FNAACL2016.pdf)。\" NAACL 2016。\n  - Yishu Miao, Phil Blunsom。\"[语言作为潜在变量：用于句子压缩的离散生成模型](http:\u002F\u002Farxiv.org\u002Fabs\u002F1609.07317)。\" EMNLP 2016。\n  - Takase, Sho, Jun Suzuki, Naoaki Okazaki, Tsutomu Hirao 和 Masaaki Nagata。\"[基于抽象意义表示的神经标题生成](https:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD\u002FD16\u002FD16-1112.pdf)。\" EMNLP，第1054-1059页。2016年。\n  - Hongya Song, Zhaochun Ren, Piji Li, Shangsong Liang, Jun Ma 和 Maarten de Rijke。[在非事实性社区问答中总结答案](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=3018704)。 载于WSDM 2017：第十届网络搜索与数据挖掘国际会议，2017年。\n  - Wenyuan Zeng, Wenjie Luo, Sanja Fidler, Raquel Urtasun。\"[通过重读和复制机制进行高效摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.03382)。\" arXiv预印本 arXiv:1611.03382 (2016年)。\n  - Piji Li, Zihao Wang, Wai Lam, Zhaochun Ren, Lidong Bing。\"[通过变分自编码器进行多文档摘要的显著性估计](https:\u002F\u002Faaai.org\u002Focs\u002Findex.php\u002FAAAI\u002FAAAI17\u002Fpaper\u002Fview\u002F14613)\"。 载于AAAI，2017年。\n  - Ramesh Nallapati, Feifei Zhai, Bowen Zhou。[SummaRuNNer: 一个基于循环神经网络的序列模型，用于文档的抽取式摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04230)。 载于AAAI，2017年。\n  - Ramesh Nallapati, Bowen Zhou, Mingbo Ma。\"[分类或选择：用于抽取式文档摘要的神经架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.04244)。\" arXiv预印本 arXiv:1611.04244 (2016年)。\n  - Suzuki, Jun 和 Masaaki Nagata。\"[为神经抽象摘要切断冗余重复生成](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE17-2047)。\" EACL 2017 (2017年)：291。\n  - Jiwei Tan 和 Xiaojun Wan。[基于图注意神经模型的抽象文档摘要]()。 ACL，2017年。\n  - Preksha Nema, Mitesh M. Khapra, Balaraman Ravindran 和 Anirban Laha。[用于基于查询的抽象摘要的多样性驱动注意力模型]()。 ACL，2017年。\n  - Abigail See, Peter J. Liu 和 Christopher D. Manning。[直击要点：使用指针生成器网络进行摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04368)。 ACL，2017年。\n  - Qingyu Zhou, Nan Yang, Furu Wei 和 Ming Zhou。[用于抽象句子摘要的选择性编码](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.07073)。 ACL，2017年。\n  - Maxime Peyrard 和 Judith Eckle-Kohler。[用于基于优化的多文档摘要的自动金字塔监督学习]()。 ACL，2017年。\n  - Shashi Narayan, Nikos Papasarantopoulos, Mirella Lapata, Shay B. Cohen。\"[具有辅助信息的神经抽取式摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.04530)。\" arXiv预印本 arXiv:1704.04530 (2017年)。\n  - Romain Paulus, Caiming Xiong, Richard Socher。\"[用于抽象摘要的深度强化模型](https:\u002F\u002Fmetamind.io\u002Fstatic\u002Fpdf\u002Fdeep-reinforced-model-arxiv-v1.pdf)。\" (2017年)。\n  - Shibhansh Dohare, Harish Karnick。\"[使用抽象意义表示的文本摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.01678)。\" \tarXiv:1706.01678 (2017年)。\n  - Michihiro Yasunaga, Rui Zhang, Kshitijh Meelu, Ayush Pareek, Krishnan Srinivasan, Dragomir Radev。\"[基于图的神经多文档摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.06681)。\" \tarXiv:1706.06681 (2017年)。\n  - Piji Li, Wai Lam, Lidong Bing 和 Zihao Wang。[用于抽象文本摘要的深度循环生成解码器](http:\u002F\u002Flipiji.com\u002F)。 自然语言处理经验方法会议（EMNLP'17）论文集。2017年9月。\n  - Piji Li, Wai Lam, Lidong Bing, Weiwei Guo 和 Hang Li。[用于压缩摘要的基于级联注意力的无监督信息蒸馏](http:\u002F\u002Flipiji.com\u002F)。 自然语言处理经验方法会议（EMNLP'17）论文集。2017年9月。\n  - Piji Li, Lidong Bing, Wai Lam。[读者感知的多文档摘要：增强模型和首个数据集](http:\u002F\u002Fwww1.se.cuhk.edu.hk\u002F~textmine\u002Fdataset\u002Fra-mds\u002F)。 EMNLP 2017摘要新前沿研讨会（EMNLP-NewSum'17）论文集。2017年9月。\n  - Tan, Jiwei, Xiaojun Wan 和 Jianguo Xiao。\"[从神经句子摘要到标题生成：一种由粗到精的方法](http:\u002F\u002Fstatic.ijcai.org\u002Fproceedings-2017\u002F0574.pdf)。\" IJCAI 2017。\n  - Ling, Jeffrey 和 Alexander M. Rush。\"[用于文档摘要的由粗到精注意力模型](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FW\u002FW17\u002FW17-4505.pdf)。\" EMNLP 2017 (2017年)：33。\n  - Ziqiang Cao, Furu Wei, Wenjie Li, Sujian Li。\"[忠于原文：事实感知的神经抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.04434)。\" arXiv:1711.04434 (2017年)。\n  - Angela Fan, David Grangier, Michael Auli。\"[可控的抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.05217)。\" arXiv:1711.05217 (2017年)。\n  - Liu, Linqing, Yao Lu, Min Yang, Qiang Qu, Jia Zhu 和 Hongyan Li。\"[用于抽象文本摘要的生成对抗网络](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1711.09357.pdf)。\" arXiv预印本 arXiv:1711.09357 (2017年)。\n  - Narayan, Shashi, Shay B. Cohen 和 Mirella Lapata。\"[使用强化学习对句子进行排序以进行抽取式摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.08636)。\" arXiv预印本 arXiv:1802.08636 (2018年)。\n  - Asli Celikyilmaz, Antoine Bosselut, Xiaodong He, Yejin Choi。\"[用于抽象摘要的深度通信代理](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.10357)。\" NAACL (2018年)。\n  - Chen, Wenhu, Guanlin Li, Shuo Ren, Shujie Liu, Zhirui Zhang, Mu Li 和 Ming Zhou。\"[神经序列预测中的生成桥接网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.09152)。\" NAACL (2018年)。\n  - Li, Piji, Lidong Bing 和 Wai Lam。\"[用于抽象摘要的基于演员-评论员的训练框架](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.11070)。\" arXiv预印本 arXiv:1803.11070 (2018年)。\n  - Arman Cohan, Franck Dernoncourt, Doo Soon Kim, Trung Bui, Seokhwan Kim, Walter Chang, Nazli Goharian。\"[用于长文档抽象摘要的语篇感知注意力模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.05685)\"。 NAACL，2018年。\n  - Yuxiang Wu, Baotian Hu。\"[通过深度强化学习提取连贯摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.07036)。\" AAAI (2018年)。\n  - Jianmin Zhang, Jiwei Tan, Xiaojun Wan。\"[迈向基于神经网络的多文档抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09010)。\" arXiv:1804.09010 (2018年)。\n  - Li Wang, Junlin Yao, Yunzhe Tao, Li Zhong, Wei Liu, Qiang Du。\"[用于抽象文本摘要的强化主题感知卷积序列到序列模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.03616)。\" IJCAI-ECAI (2018年)。\n  - Yen-Chun Chen, Mohit Bansal。\"[使用强化选择句子重写的快速抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.11080)。\" arXiv:1805.11080 (2018年)。\n  - Song, Kaiqiang, Lin Zhao 和 Fei Liu。\"[用于抽象摘要的结构注入复制机制](http:\u002F\u002Fwww.cs.ucf.edu\u002F~feiliu\u002Fpapers\u002FCOLING2018_StructSumm.pdf)。\" COLING，2018年。\n  - Keneshloo, Yaser, Tian Shi, Chandan K. Reddy 和 Naren Ramakrishnan。\"[用于序列到序列模型的深度强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.09461)。\" arXiv预印本 arXiv:1805.09461 (2018年)。\n  - Qingyu Zhou, Nan Yang, Furu Wei, Ming Zhou。\"[顺序复制网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02301)。\" AAAI (2018年)。\n  - Qingyu Zhou, Nan Yang, Furu Wei, Shaohan Huang, Ming Zhou, Tiejun Zhao。\"[通过联合学习评分和选择句子进行神经文档摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.02305)。\" ACL (2018年)。\n  - Lin, Junyang, Xu Sun, Shuming Ma 和 Qi Su。\"[用于抽象摘要的全局编码](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.03989)。\" arXiv预印本 arXiv:1805.03989 (2018年)。\n  - Khatri, Chandra, Gyanit Singh 和 Nish Parikh。\"[使用文档上下文向量和循环神经网络进行抽象和抽取文本摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.08000)。\" arXiv预印本 arXiv:1807.08000 (2018年)。\n  - Hsu, Wan-Ting, Chieh-Kai Lin, Ming-Ying Lee, Kerui Min, Jing Tang 和 Min Sun。\"[使用不一致性损失统一抽取式和抽象式摘要的模型](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.06266)。\" arXiv预印本 arXiv:1805.06266 (2018年)。\n  - Sun, Fei, Peng Jiang, Hanxiao Sun, Changhua Pei, Wenwu Ou 和 Xiaobo Wang。\"[用于产品标题摘要的多源指针网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.06885)。\" arXiv预印本 arXiv:1808.06885 (2018年)。\n  - Wojciech Kryściński, Romain Paulus, Caiming Xiong, Richard Socher。\"[改进文本摘要中的抽象性](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.07913)。\" arXiv预印本 arXiv:1808.07913 (2018年)。\n  - Zhang, Xingxing, Mirella Lapata, Furu Wei 和 Ming Zhou。\"[神经潜在抽取式文档摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.07187)。\" arXiv预印本 arXiv:1808.07187 (2018年)。\n  - Sebastian Gehrmann, Yuntian Deng, Alexander M. Rush。\"[自底向上的抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.10792)。\" arXiv预印本 arXiv:1808.10792 (2018年)。\n  - Yichen Jiang, Mohit Bansal。\"[封闭式训练以改进摘要编码器记忆](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.04585)。\" arXiv预印本 arXiv:1809.04585 (2018年)。\n  - Kamal Al-Sabahi, Zhang Zuping, Yang Kang。\"[用于抽象摘要的双向注意力编码器-解码器模型和双向束搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.06662)。\" arXiv预印本 arXiv:1809.06662 (2018年)。\n  - Raphael Schumann。\"[使用长度控制变分自编码器的无监督抽象句子摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1809.05233)。\" arXiv预印本 arXiv:1809.05233 (2018年)。\n  - Krishna, Kundan 和 Balaji Vasan Srinivasan。\"[使用神经注意力生成面向主题的摘要](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FN18-1153)。\" NAACL 2018。\n  - Lisa Fan, Dong Yu, Lu Wang。\"[鲁棒的神经抽象摘要系统及对抗信息评估](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06065)。\" arXiv预印本 arXiv:1810.06065 (2018年)。\n  - Eric Chu, Peter J. Liu。\"[无监督神经多文档抽象摘要](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05739)。\" arXiv预印本 arXiv:1810.05739 (2018年)。\n  - Yaser Keneshloo, Naren Ramakrishnan, Chandan K. Reddy。\"[用于文本摘要的深度迁移强化学习](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.06667)。\" arXiv预印本 arXiv:1810.06667 (2018年)。\n  - Mahnaz Koupaee, William Yang Wang。\"[WikiHow: 一个大规模文本摘要数据集](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.09305)。\" arXiv预印本 arXiv:1810.09305 (2018年)。\n  - Li Dong, Nan Yang, Wenhui Wang, Furu Wei, Xiaodong Liu, Yu Wang, Jianfeng Gao, Ming Zhou, Hsiao-Wuen Hon。\"[用于自然语言理解和生成的统一语言模型预训练](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.03197)。\" arXiv预印本 arXiv:1905.03197 (2019年)。\n\n### 观点摘要\n  - Wu, Haibing, Yiwei Gu, Shangdi Sun, and Xiaodong Gu. \"[基于方面的观点摘要与卷积神经网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.09128).\" arXiv preprint arXiv:1511.09128 (2015).\n  - Irsoy, Ozan, and Claire Cardie. \"[使用深度循环神经网络进行观点挖掘](http:\u002F\u002Fanthology.aclweb.org\u002FD\u002FD14\u002FD14-1080.pdf).\" In EMNLP, pp. 720-728. 2014.\n  - Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, Wai Lam. \"[用于推荐的神经评分回归与抽象提示生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00154).\". In SIGIR, 2017.\n  \n### 视频摘要\n  - Zhou, Kaiyang, and Yu Qiao. \"[用于无监督视频摘要的深度强化学习与多样性-代表性奖励](https:\u002F\u002Farxiv.org\u002Fabs\u002F1801.00054).\" arXiv preprint arXiv:1801.00054 (2017). \n  - Mahasseni, Behrooz, Michael Lam, and Sinisa Todorovic. \"[使用对抗性LSTM网络进行无监督视频摘要](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2017\u002Fpapers\u002FMahasseni_Unsupervised_Video_Summarization_CVPR_2017_paper.pdf).\" In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR). 2017.\n\n### 阅读理解\n - Hermann, Karl Moritz, Tomas Kocisky, Edward Grefenstette, Lasse Espeholt, Will Kay, Mustafa Suleyman, and Phil Blunsom. \"[教机器阅读和理解](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5945-teaching-machines-to-read-and-comprehend).\" In Advances in Neural Information Processing Systems, pp. 1693-1701. 2015.\n - Hill, Felix, Antoine Bordes, Sumit Chopra, and Jason Weston. \"[金发姑娘原则：使用显式记忆表示阅读儿童书籍](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.02301).\" arXiv preprint arXiv:1511.02301 (2015).\n - Kadlec, Rudolf, Martin Schmid, Ondrej Bajgar, and Jan Kleindienst. \"[使用注意力求和阅读器网络进行文本理解](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01547).\" arXiv preprint arXiv:1603.01547 (2016).\n - Chen, Danqi, Jason Bolton, and Christopher D. Manning. \"[对CNN\u002F每日邮报阅读理解任务的全面考察](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02858).\" arXiv preprint arXiv:1606.02858 (2016).\n - Dhingra, Bhuwan, Hanxiao Liu, William W. Cohen, and Ruslan Salakhutdinov. \"[用于文本理解的门控注意力阅读器](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01549).\" arXiv preprint arXiv:1606.01549 (2016).\n - Sordoni, Alessandro, Phillip Bachman, and Yoshua Bengio. \"[用于机器阅读的迭代交替神经注意力](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02245).\" arXiv preprint arXiv:1606.02245 (2016).\n - Trischler, Adam, Zheng Ye, Xingdi Yuan, and Kaheer Suleman. \"[使用EpiReader进行自然语言理解](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02270).\" arXiv preprint arXiv:1606.02270 (2016).\n - Yiming Cui, Zhipeng Chen, Si Wei, Shijin Wang, Ting Liu, Guoping Hu. \"[用于阅读理解的注意力之上的注意力神经网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.04423).\" arXiv preprint arXiv:1607.04423 (2016).\n - Yiming Cui, Ting Liu, Zhipeng Chen, Shijin Wang, Guoping Hu. \"[基于共识注意力的中文阅读理解神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.02250).\" arXiv preprint arXiv:1607.02250 (2016).\n - Daniel Hewlett, Alexandre Lacoste, Llion Jones, Illia Polosukhin, Andrew Fandrianto, Jay Han, Matthew Kelcey and David Berthelot. \"[维基阅读：一个基于维基百科的新型大规模语言理解任务](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FP\u002FP16\u002FP16-1145.pdf).\" ACL (2016). pp. 1535-1545.\n  - Minghao Hu, Yuxing Peng, Xipeng Qiu. \"[用于机器理解的记忆阅读器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.02798).\" arXiv:1705.02798 (2017).\n  - Wenhui Wang, Nan Yang, Furu Wei, Baobao Chang and Ming Zhou. \"[R-NET：使用自匹配网络的机器阅读理解](https:\u002F\u002Fwww.microsoft.com\u002Fen-us\u002Fresearch\u002Fpublication\u002Fmcr\u002F).\" ACL (2017).\n  \n\n### 句子建模\n  - Kalchbrenner, Nal, Edward Grefenstette, and Phil Blunsom. \"[用于句子建模的卷积神经网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1404.2188).\" arXiv preprint arXiv:1404.2188 (2014).\n  - Kim, Yoon. \"[用于句子分类的卷积神经网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1408.5882).\" arXiv preprint arXiv:1408.5882 (2014).\n  - Le, Quoc V., and Tomas Mikolov. \"[句子和文档的分布式表示](http:\u002F\u002Farxiv.org\u002Fabs\u002F1405.4053).\" arXiv preprint arXiv:1405.4053 (2014).\n  - Yang, Zichao, Diyi Yang, Chris Dyer, Xiaodong He, Alex Smola, and Eduard Hovy. \"[用于文档分类的分层注意力网络](http:\u002F\u002Fwww.cs.cmu.edu\u002F~diyiy\u002Fdocs\u002Fnaacl16.pdf).\" In Proceedings of the 2016 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies. 2016.\n\n### 推理\n  - Peng, Baolin, Zhengdong Lu, Hang Li, and Kam-Fai Wong. \"[迈向基于神经网络的推理](http:\u002F\u002Farxiv.org\u002Fabs\u002F1508.05508).\" arXiv preprint arXiv:1508.05508 (2015).\n  \n### 知识引擎\n - Bordes, Antoine, Nicolas Usunier, Alberto Garcia-Duran, Jason Weston, and Oksana Yakhnenko. \"[用于建模多关系数据的嵌入翻译](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5071-translating-embeddings-for-modeling-multi-relational-data).\" In Advances in Neural Information Processing Systems, pp. 2787-2795. 2013. TransE\n - Lin, Yankai, Shiqi Shen, Zhiyuan Liu, Huanbo Luan, and Maosong Sun. \"[基于实例选择性注意力的神经关系抽取](http:\u002F\u002Fnlp.csai.tsinghua.edu.cn\u002F~lzy\u002Fpublications\u002Facl2016_nre.pdf).\" ACL (2016)\n - TransXXX\n\n### 记忆网络\n - Graves, Alex, Greg Wayne, and Ivo Danihelka. \"[神经图灵机](http:\u002F\u002Farxiv.org\u002Fabs\u002F1410.5401).\" arXiv preprint arXiv:1410.5401 (2014).\n - Weston, Jason, Sumit Chopra, and Antoine Bordes. \"[记忆网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1410.3916).\" ICLR (2014).\n - Sukhbaatar, Sainbayar, Jason Weston, and Rob Fergus. \"[端到端记忆网络](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F5846-end-to-end-memory-networks).\" In Advances in neural information processing systems, pp. 2440-2448. 2015.\n - Weston, Jason, Antoine Bordes, Sumit Chopra, Alexander M. Rush, Bart van Merriënboer, Armand Joulin, and Tomas Mikolov. \"[迈向AI完备的问答：一套先决条件的玩具任务](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.05698).\" arXiv preprint arXiv:1502.05698 (2015).\n - Bordes, Antoine, Nicolas Usunier, Sumit Chopra, and Jason Weston. \"[基于记忆网络的大规模简单问答](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02075).\" arXiv preprint arXiv:1506.02075 (2015).\n - Kumar, Ankit, Ozan Irsoy, Jonathan Su, James Bradbury, Robert English, Brian Pierce, Peter Ondruska, Ishaan Gulrajani, and Richard Socher. \"[任意提问：用于自然语言处理的动态记忆网络](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.07285).\" arXiv preprint arXiv:1506.07285 (2015).\n - Dodge, Jesse, Andreea Gane, Xiang Zhang, Antoine Bordes, Sumit Chopra, Alexander Miller, Arthur Szlam, and Jason Weston. \"[评估端到端对话系统学习的先决条件质量](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06931).\" arXiv preprint arXiv:1511.06931 (2015).\n - Hill, Felix, Antoine Bordes, Sumit Chopra, and Jason Weston. \"[金发姑娘原则：使用显式记忆表征阅读儿童书籍](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.02301).\" arXiv preprint arXiv:1511.02301 (2015).\n - Weston, Jason. \"[基于对话的语言学习](http:\u002F\u002Farxiv.org\u002Fabs\u002F1604.06045).\" arXiv preprint arXiv:1604.06045 (2016).\n - Bordes, Antoine, and Jason Weston. \"[学习端到端目标导向对话](http:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07683).\" arXiv preprint arXiv:1605.07683 (2016).\n - Chandar, Sarath, Sungjin Ahn, Hugo Larochelle, Pascal Vincent, Gerald Tesauro, and Yoshua Bengio. \"[分层记忆网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07427).\" arXiv preprint arXiv:1605.07427 (2016).\n - Jason Weston.\"[用于语言理解的记忆网络](http:\u002F\u002Fwww.thespermwhale.com\u002Fjaseweston\u002Ficml2016\u002F).\" ICML Tutorial 2016\n - Tang, Yaohua, Fandong Meng, Zhengdong Lu, Hang Li, and Philip LH Yu. \"[使用外部短语记忆的神经机器翻译](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.01792).\" arXiv preprint arXiv:1606.01792 (2016).\n - Wang, Mingxuan, Zhengdong Lu, Hang Li, and Qun Liu. \"[用于神经机器翻译的记忆增强解码器](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.02003).\" arXiv preprint arXiv:1606.02003 (2016).\n - Xiong, Caiming, Stephen Merity, and Richard Socher. \"[用于视觉和文本问答的动态记忆网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.01417).\" arXiv preprint arXiv:1603.01417 (2016).\n\n### 神经网络结构\n - Srivastava, Rupesh Kumar, Klaus Greff, and Jürgen Schmidhuber. \"[Highway networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1505.00387).\" arXiv preprint arXiv:1505.00387 (2015).\n - Srivastava, Rupesh K., Klaus Greff, and Jürgen Schmidhuber. \"[Training very deep networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1507.06228).\" In Advances in Neural Information Processing Systems, pp. 2368-2376. 2015.\n - Vinyals, Oriol, Meire Fortunato, and Navdeep Jaitly. \"[Pointer networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03134).\" In Advances in Neural Information Processing Systems, pp. 2692-2700. 2015.\n - Rasmus, Antti, Mathias Berglund, Mikko Honkala, Harri Valpola, and Tapani Raiko. \"[Semi-supervised learning with ladder networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1507.02672).\" In Advances in Neural Information Processing Systems, pp. 3546-3554. 2015.\n - Bengio, Samy, Oriol Vinyals, Navdeep Jaitly, and Noam Shazeer. \"[Scheduled sampling for sequence prediction with recurrent neural networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.03099).\" In Advances in Neural Information Processing Systems, pp. 1171-1179. 2015.\n - He, Kaiming, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. \"[Deep Residual Learning for Image Recognition](http:\u002F\u002Farxiv.org\u002Fabs\u002F1512.03385).\" arXiv preprint arXiv:1512.03385 (2015).\n - He, Kaiming. \"[Tutorial: Deep\tResidual\tNetworks: Deep\tLearning\tGets\tWay Deeper](http:\u002F\u002Ficml.cc\u002F2016\u002Ftutorials\u002Ficml2016_tutorial_deep_residual_networks_kaiminghe.pdf).\" ICML\t2016\ttutorial.\n - Courbariaux, Matthieu, and Yoshua Bengio. \"[Binarynet: Training deep neural networks with weights and activations constrained to+ 1 or-1](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.02830).\" arXiv preprint arXiv:1602.02830 (2016).\n - Jiatao Gu, Zhengdong Lu, Hang Li, Victor O.K. Li. \"[Incorporating Copying Mechanism in Sequence-to-Sequence Learning](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06393).\" ACL (2016)\n - Gulcehre, Caglar, Sungjin Ahn, Ramesh Nallapati, Bowen Zhou, and Yoshua Bengio. \"[Pointing the Unknown Words](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08148).\" arXiv preprint arXiv:1603.08148 (2016).\n - Andreas, Jacob, Marcus Rohrbach, Trevor Darrell, and Dan Klein. \"[Learning to compose neural networks for question answering](http:\u002F\u002Farxiv.org\u002Fabs\u002F1601.01705).\" NAACL 2016.\n - Julian Georg Zilly, Rupesh Kumar Srivastava, Jan Koutník, Jürgen Schmidhuber. \"[Recurrent Highway Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1607.03474).\" arXiv preprint  arXiv:1607.03474 (2016).\n - Zhilin Yang, Ye Yuan, Yuexin Wu, Ruslan Salakhutdinov, William W. Cohen. \"[Review Networks for Caption Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1605.07912).\" arXiv preprint  arXiv:1605.07912 (2016).\n - Xiang Li, Tao Qin, Jian Yang, Tie-Yan Liu. \"[LightRNN: Memory and Computation-Efficient Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09893).\" arXiv preprint  arXiv:1610.09893 (2016).\n - Zhaopeng Tu, Yang Liu, Lifeng Shang, Xiaohua Liu, Hang Li. \"[Neural Machine Translation with Reconstruction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.01874).\" arXiv preprint  arXiv:1611.01874 (2016).\n - Yingce Xia, Di He, Tao Qin, Liwei Wang, Nenghai Yu, Tie-Yan Liu, Wei-Ying Ma. \"[Dual Learning for Machine Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.00179).\" arXiv preprint  arXiv:1611.00179 (2016).\n  - Bahdanau, Dzmitry, Philemon Brakel, Kelvin Xu, Anirudh Goyal, Ryan Lowe, Joelle Pineau, Aaron Courville, and Yoshua Bengio. \"[An actor-critic algorithm for sequence prediction](https:\u002F\u002Farxiv.org\u002Fabs\u002F1607.07086).\" arXiv preprint arXiv:1607.07086 (2016).\n - Kannan, Anjuli, and Oriol Vinyals. \"[Adversarial evaluation of dialogue models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.08198).\" arXiv preprint arXiv:1701.08198 (2017).\n - Kawthekar, Prasad, Raunaq Rewari, and Suvrat Bhooshan. \"[Evaluating Generative Models for Text Generation](https:\u002F\u002Fweb.stanford.edu\u002Fclass\u002Fcs224n\u002Freports\u002F2737434.pdf).\"\n - Li, Jiwei, Will Monroe, Tianlin Shi, Alan Ritter, and Dan Jurafsky. \"[Adversarial Learning for Neural Dialogue Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1701.06547).\" arXiv preprint arXiv:1701.06547 (2017).\n - Yang, Zhen, Wei Chen, Feng Wang, and Bo Xu. \"[Improving Neural Machine Translation with Conditional Sequence Generative Adversarial Nets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.04887).\" arXiv preprint arXiv:1703.04887 (2017).\n - Lijun Wu, Yingce Xia, Li Zhao, Fei Tian, Tao Qin, Jianhuang Lai, Tie-Yan Liu. \"[Adversarial Neural Machine Translation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.06933).\" IJCAI (2017).\n - Liu, Pengfei, Xipeng Qiu, and Xuanjing Huang. \"[Adversarial Multi-task Learning for Text Classification](https:\u002F\u002Farxiv.org\u002Fabs\u002F1704.05742).\" arXiv preprint arXiv:1704.05742 (2017).\n - Jonas Gehring, Michael Auli, David Grangier, Denis Yarats, Yann N. Dauphin. \"[Convolutional Sequence to Sequence Learning (https:\u002F\u002Farxiv.org\u002Fabs\u002F1705.03122).\"  arXiv:1705.03122 (2017).\n - Lamb, Alex M., Anirudh Goyal ALIAS PARTH GOYAL, Ying Zhang, Saizheng Zhang, Aaron C. Courville, and Yoshua Bengio. \"[Professor forcing: A new algorithm for training recurrent networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09038).\" In Advances In Neural Information Processing Systems, pp. 4601-4609. 2016.\n - Rezende, Danilo Jimenez, Shakir Mohamed, and Daan Wierstra. \"[Stochastic backpropagation and approximate inference in deep generative models](http:\u002F\u002Farxiv.org\u002Fabs\u002F1401.4082).\" arXiv preprint arXiv:1401.4082 (2014).\n - Kingma, Diederik P., and Max Welling. \"[Auto-encoding variational bayes](http:\u002F\u002Farxiv.org\u002Fabs\u002F1312.6114).\" arXiv preprint arXiv:1312.6114 (2013).\n - Fabius, Otto, and Joost R. van Amersfoort. \"[Variational recurrent auto-encoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1412.6581).\" arXiv preprint arXiv:1412.6581 (2014).\n - Bayer, Justin, and Christian Osendorfer. \"[Learning stochastic recurrent networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1411.7610).\" arXiv preprint arXiv:1411.7610 (2014).\n - Bowman, Samuel R., Luke Vilnis, Oriol Vinyals, Andrew M. Dai, Rafal Jozefowicz, and Samy Bengio. \"[Generating sentences from a continuous space](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06349).\" arXiv preprint arXiv:1511.06349 (2015).\n - Gregor, Karol, Ivo Danihelka, Alex Graves, Danilo Jimenez Rezende, and Daan Wierstra. \"[DRAW: A recurrent neural network for image generation](http:\u002F\u002Farxiv.org\u002Fabs\u002F1502.04623).\" arXiv preprint arXiv:1502.04623 (2015).\n - Makhzani, Alireza, Jonathon Shlens, Navdeep Jaitly, and Ian Goodfellow. \"[Adversarial autoencoders](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05644).\" arXiv preprint arXiv:1511.05644 (2015).\n - Johnson, Matthew J., David Duvenaud, Alexander B. Wiltschko, Sandeep R. Datta, and Ryan P. Adams. \"[Composing graphical models with neural networks for structured representations and fast inference](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.06277).\" arXiv preprint arXiv:1603.06277 (2016).\n - Doersch, Carl. \"[Tutorial on Variational Autoencoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.05908).\" arXiv preprint arXiv:1606.05908 (2016).\n - Chung, Junyoung, Kyle Kastner, Laurent Dinh, Kratarth Goel, Aaron C. Courville, and Yoshua Bengio. \"[A recurrent latent variable model for sequential data](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02216).\" In Advances in neural information processing systems, pp. 2980-2988. 2015.\n - Eslami, S. M., Nicolas Heess, Theophane Weber, Yuval Tassa, Koray Kavukcuoglu, and Geoffrey E. Hinton. \"[Attend, Infer, Repeat: Fast Scene Understanding with Generative Models](https:\u002F\u002Farxiv.org\u002Fabs\u002F1603.08575).\" arXiv preprint arXiv:1603.08575 (2016).\n - Shengjia Zhao, Jiaming Song, Stefano Ermon. \"[InfoVAE: Information Maximizing Variational Autoencoders](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.02262).\" arXiv:1706.02262 (2017).\n - Goodfellow, Ian, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, and Yoshua Bengio. \"[Generative adversarial nets](http:\u002F\u002Farxiv.org\u002Fabs\u002F1406.2661).\" In Advances in Neural Information Processing Systems, pp. 2672-2680. 2014\n - Radford, Alec, Luke Metz, and Soumith Chintala. \"[Unsupervised representation learning with deep convolutional generative adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434).\" arXiv preprint arXiv:1511.06434 (2015).\n - Denton, Emily L., Soumith Chintala, and Rob Fergus. \"[Deep Generative Image Models using a￼ Laplacian Pyramid of Adversarial Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1506.05751).\" In Advances in neural information processing systems, pp. 1486-1494. 2015.\n - Dosovitskiy, Alexey, Jost Tobias Springenberg, and Thomas Brox. \"[Learning to generate chairs with convolutional neural networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1411.5928).\" In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1538-1546. 2015.\n - Mathieu, Michael, Camille Couprie, and Yann LeCun. \"[Deep multi-scale video prediction beyond mean square error](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.05440).\" arXiv preprint arXiv:1511.05440 (2015).\n - Salimans, Tim, Ian Goodfellow, Wojciech Zaremba, Vicki Cheung, Alec Radford, and Xi Chen. \"[Improved Techniques for Training GANs](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03498).\" arXiv preprint arXiv:1606.03498 (2016).\n - Chen, Xi, Yan Duan, Rein Houthooft, John Schulman, Ilya Sutskever, and Pieter Abbeel. \"[InfoGAN: Interpretable Representation Learning by Information Maximizing Generative Adversarial Nets](http:\u002F\u002Farxiv.org\u002Fabs\u002F1606.03657).\" arXiv preprint arXiv:1606.03657 (2016).\n - Im, Daniel Jiwoong, Chris Dongjoo Kim, Hui Jiang, and Roland Memisevic. \"[Generating images with recurrent adversarial networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1602.05110).\" arXiv preprint arXiv:1602.05110 (2016).\n - Yu, Lantao, Weinan Zhang, Jun Wang, and Yong Yu. \"[SeqGAN: Sequence Generative Adversarial Nets with Policy Gradient](http:\u002F\u002Farxiv.org\u002Fabs\u002F1609.05473).\" arXiv preprint arXiv:1609.05473 (2016).\n - Augustus Odena, Christopher Olah, Jonathon Shlens. \"[Conditional Image Synthesis With Auxiliary Classifier GANs](https:\u002F\u002Farxiv.org\u002Fabs\u002F1610.09585).\" arXiv preprint arXiv:1610.09585 (2016).\n - Ian Goodfellow. \"[NIPS Tutorial: GANs](http:\u002F\u002Fwww.iangoodfellow.com\u002Fslides\u002F2016-12-04-NIPS.pdf)\", NIPS, 2016\n - Che, Tong, Yanran Li, Ruixiang Zhang, R. Devon Hjelm, Wenjie Li, Yangqiu Song, and Yoshua Bengio. \"[Maximum-Likelihood Augmented Discrete Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.07983).\" arXiv preprint arXiv:1702.07983 (2017).\n - Junbo (Jake) Zhao, Yoon Kim, Kelly Zhang, Alexander M. Rush, Yann LeCun. \"[Adversarially Regularized Autoencoders for Generating Discrete Structures](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04223).\" arXiv preprint arXiv:1706.04223 (2017).\n - \tMike Lewis  \tDenis Yarats  \tYann N. Dauphin  \tDevi Parikh  \tDhruv Batra . \"[ Deal or No Deal? End-to-End Learning for Negotiation Dialogues](http:\u002F\u002Fs3.amazonaws.com\u002Fend-to-end-negotiator\u002Fend-to-end-negotiator.pdf).\" (2017).\n - Mihaela Rosca, Balaji Lakshminarayanan, David Warde-Farley, Shakir Mohamed. \"[Variational Approaches for Auto-Encoding Generative Adversarial Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1706.04987).\" arXiv preprint arXiv:1706.04987 (2017).\n - Goyal, Prasoon, Zhiting Hu, Xiaodan Liang, Chenyu Wang, and Eric Xing. \"[Nonparametric Variational Auto-encoders for Hierarchical Representation Learning](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1703.07027.pdf).\" arXiv preprint arXiv:1703.07027 (2017).\n - Sabour, Sara, Nicholas Frosst, and Geoffrey Hinton. \"[Dynamic Routing between Capsules](https:\u002F\u002Farxiv.org\u002Fabs\u002F1710.09829).\" (2017).\n - Vaswani, Ashish, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. \"[Attention is all you need](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F7181-attention-is-all-you-need).\" NIPS. 2017.\n \n#### 架构搜索\n- Frankle, Jonathan, and Michael Carbin. \"The lottery ticket hypothesis: Finding sparse, trainable neural networks.\" arXiv preprint arXiv:1803.03635 (2018).\n- Xie, Saining, Alexander Kirillov, Ross Girshick, and Kaiming He. \"Exploring Randomly Wired Neural Networks for Image Recognition.\" arXiv preprint arXiv:1904.01569 (2019).\n- So, David R., Chen Liang, and Quoc V. Le. \"The Evolved Transformer.\" arXiv preprint arXiv:1901.11117 (2019).\n- Chenguang Wang, Mu Li, Alexander J. Smola. \"Language Models with Transformers.\" arXiv preprint arXiv:1904.09408 (2019).\n\n### 推荐系统\n- Salakhutdinov, Ruslan, Andriy Mnih, and Geoffrey Hinton. \"[Restricted Boltzmann machines for collaborative filtering](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=1273596).\" In Proceedings of the 24th international conference on Machine learning, pp. 791-798. ACM, 2007.\n- Wang, Hao, Xingjian Shi, and Dit-Yan Yeung. \"[Relational Stacked Denoising Autoencoder for Tag Recommendation](http:\u002F\u002Fwww.wanghao.in\u002Fpaper\u002FAAAI15_RSDAE.pdf).\" In AAAI, pp. 3052-3058. 2015.\n- Wang, Hao, Naiyan Wang, and Dit-Yan Yeung. \"[Collaborative deep learning for recommender systems](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2783273).\" In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 1235-1244. ACM, 2015.\n- Covington, Paul, Jay Adams, and Emre Sargin. \"[Deep neural networks for youtube recommendations](http:\u002F\u002Fdl.acm.org\u002Fcitation.cfm?id=2959190).\" In Proceedings of the 10th ACM Conference on Recommender Systems, pp. 191-198. ACM, 2016.\n- Devooght, Robin, and Hugues Bersini. \"[Collaborative Filtering with Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.07400).\" arXiv preprint arXiv:1608.07400 (2016).\n- Wang, Hao, S. H. I. Xingjian, and Dit-Yan Yeung. \"[Collaborative recurrent autoencoder: Recommend while learning to fill in the blanks](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F6163-collaborative-recurrent-autoencoder-recommend-while-learning-to-fill-in-the-blanks).\" In Advances in Neural Information Processing Systems, pp. 415-423. 2016.\n- Tang, Jian, Yifan Yang, Sam Carton, Ming Zhang, and Qiaozhu Mei. \"[Context-aware Natural Language Generation with Recurrent Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.09900).\" arXiv preprint arXiv:1611.09900 (2016).\n- Zhang, Fuzheng, Nicholas Jing Yuan, Defu Lian, Xing Xie, and Wei-Ying Ma. \"[Collaborative Knowledge Base Embedding for Recommender Systems](http:\u002F\u002Fwww.kdd.org\u002Fkdd2016\u002Fsubtopic\u002Fview\u002Fcollaborative-knowledge-base-embedding-for-recommender-systems).\" KDD, 2016. \n- Dong, Li, Shaohan Huang, Furu Wei, Mirella Lapata, Ming Zhou, and Ke XuΤ. \"[Learning to Generate Product Reviews from Attributes](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FE\u002FE17\u002FE17-1059.pdf).\" EACL, 2017.\n- He, Xiangnan. \"[Neural Collaborative Filtering](http:\u002F\u002Fwww.comp.nus.edu.sg\u002F~xiangnan\u002Fpapers\u002Fncf.pdf).\" WWW, 2017\n- Wu, Chao-Yuan, Amr Ahmed, Alex Beutel, Alexander J. Smola, and How Jing. \"[Recurrent Recommender Networks](http:\u002F\u002Falbeutel.com\u002Fpapers\u002Frrn_wsdm2017.pdf).\" Training 10, no. 2: 10-1.2017\n- Radford, Alec, Rafal Jozefowicz, and Ilya Sutskever. \"[Learning to generate reviews and discovering sentiment](https:\u002F\u002Farxiv.org\u002Fpdf\u002F1704.01444.pdf).\" arXiv preprint arXiv:1704.01444 (2017).\n- Piji Li, Zihao Wang, Zhaochun Ren, Lidong Bing, Wai Lam. \"[Neural Rating Regression with Abstractive Tips Generation for Recommendation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.00154).\". In SIGIR, pp xx-xx. 2017.\n\n### 网络表示学习\n - [网络表示学习（NRL）\u002F网络嵌入（NE）必读论文](https:\u002F\u002Fgithub.com\u002Fthunlp\u002FNRLPapers) \n\n### 音乐生成\n - [使用机器学习生成音乐](http:\u002F\u002Fwww.datasciencecentral.com\u002Fprofiles\u002Fblogs\u002Fusing-machine-learning-to-generate-music)\n \n### 计算生物学\n - [Awesome DeepBio](https:\u002F\u002Fgithub.com\u002Fgokceneraslan\u002Fawesome-deepbio) by Gökçen Eraslan\n\n### 围棋\n - Silver, David, Aja Huang, Chris J. Maddison, Arthur Guez, Laurent Sifre, George van den Driessche, Julian Schrittwieser et al. \"[Mastering the game of Go with deep neural networks and tree search](http:\u002F\u002Fwww.nature.com\u002Fnature\u002Fjournal\u002Fv529\u002Fn7587\u002Ffull\u002Fnature16961.html).\" Nature 529, no. 7587 (2016): 484-489.\n - Tian, Yuandong, and Yan Zhu. \"[Better Computer Go Player with Neural Network and Long-term Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06410).\" arXiv preprint arXiv:1511.06410 (2015).\n \n### 股票预测\n  - Xiao Ding, Yue Zhang, Ting Liu, Junwen Duan. \"Deep Learning for Event-Driven Stock Prediction\". IJCAI 2015.\n  - Si, Jianfeng, Arjun Mukherjee, Bing Liu, Sinno Jialin Pan, Qing Li, and Huayi Li. \"[Exploiting Social Relations and Sentiment for Stock Prediction](http:\u002F\u002Fwww.aclweb.org\u002Fanthology\u002FD14-1120).\" EMNLP 2014.\n  - Ding, Xiao, Yue Zhang, Ting Liu, and Junwen Duan. \"[Using Structured Events to Predict Stock Price Movement: An Empirical Investigation](http:\u002F\u002Fanthology.aclweb.org\u002FD\u002FD14\u002FD14-1148.pdf).\" EMNLP 2014.\n  - Bollen, Johan, Huina Mao, and Xiaojun Zeng. \"[Twitter mood predicts the stock market](http:\u002F\u002Farxiv.org\u002Fabs\u002F1010.3003).\" Journal of Computational Science 2, no. 1 (2011): 1-8.\n  - Hengjian Jia. \"[Investigation Into The Effectiveness Of Long Short Term Memory Networks For Stock Price Prediction](http:\u002F\u002Farxiv.org\u002Fabs\u002F1603.07893).\" arXiv:1603.07893. (2016)","# App-DL 快速上手指南\n\n## 环境准备\n\n### 系统要求\n- **操作系统**: Linux (推荐 Ubuntu 18.04+ 或 CentOS 7+)，macOS 10.14+，Windows 10\u002F11 (需配置 WSL 2)\n- **Python**: 3.7 或 3.8 版本\n- **CUDA** (GPU 用户): 10.2 或 11.x (与 PyTorch 版本匹配)\n\n### 前置依赖\n- Git\n- pip (Python 包管理器)\n- 可选: Conda 或 Miniconda (用于环境管理)\n\n## 安装步骤\n\n### 1. 克隆代码仓库\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Flipiji\u002FApp-DL.git\ncd App-DL\n```\n\n### 2. 创建 Python 虚拟环境 (推荐)\n使用 Conda:\n```bash\nconda create -n app-dl python=3.8\nconda activate app-dl\n```\n\n或使用 venv:\n```bash\npython -m venv venv\nsource venv\u002Fbin\u002Factivate  # Linux\u002FmacOS\n# 或 venv\\Scripts\\activate  # Windows\n```\n\n### 3. 安装依赖包\n使用国内镜像源加速下载:\n```bash\npip install -r requirements.txt -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n如果缺少 `requirements.txt` 文件，可安装核心依赖:\n```bash\npip install torch torchvision\npip install numpy pandas scikit-learn\npip install jupyter matplotlib\n```\n\n## 基本使用\n\n### 1. 运行示例脚本\n查看项目中的示例目录:\n```bash\nls examples\u002F\n```\n\n运行一个简单的深度学习示例:\n```bash\npython examples\u002Fbasic_demo.py\n```\n\n### 2. 使用 Jupyter Notebook\n启动 Jupyter 并打开教程笔记本:\n```bash\njupyter notebook\n```\n在浏览器中打开 `tutorials\u002F` 目录下的 `.ipynb` 文件。\n\n### 3. 基础模型训练\n运行文本生成示例:\n```bash\npython train.py --config configs\u002Ftext_generation.yaml\n```\n\n### 4. 测试安装\n创建测试脚本 `test_install.py`:\n```python\nimport torch\nimport numpy as np\nprint(\"PyTorch版本:\", torch.__version__)\nprint(\"CUDA可用:\", torch.cuda.is_available())\nprint(\"测试完成!\")\n```\n运行:\n```bash\npython test_install.py\n```","一家初创电商公司正在开发一个智能客服对话系统，旨在自动处理用户的售前咨询，例如产品推荐、库存查询和促销活动解答。开发团队由几名全栈工程师和一名数据科学家组成，他们希望利用深度学习技术提升对话系统的准确性和流畅度。\n\n### 没有 App-DL 时\n- **技术选型困难**：团队需要从海量的论文、博客和开源项目中筛选与任务型对话系统相关的技术资料，过程耗时且难以判断哪些是最前沿、最适合当前场景的方案。\n- **实现路径模糊**：确定了大致方向（如深度强化学习用于对话策略）后，缺乏具体的算法实现参考和代码示例，从理论到工程落地的鸿沟很大，试错成本高。\n- **知识体系零散**：团队成员收集的资料分散在各个书签、本地文档中，关于对话系统、文本生成和强化学习的知识无法有效串联，形成系统化的开发指导。\n- **跟进前沿滞后**：由于信息渠道有限，团队很难及时了解到该领域最新的研究成果（如更高效的探索策略、更稳定的训练方法），系统迭代速度慢。\n\n### 使用 App-DL 后\n- **快速精准定位资源**：团队通过 App-DL 中结构化的“Task-Oriented Dialogue”和“Deep Reinforcement Learning”分类，迅速找到了 Wen 等人关于端到端任务对话系统的经典论文，以及 Li 等人将深度强化学习应用于对话生成的实践，极大缩短了调研周期。\n- **获得清晰的实现蓝图**：App-DL 提供的论文链接和代码资源（如相关 GitHub 项目）为团队提供了从算法原理到模型架构的具体参考，特别是《Deep Dyna-Q: Integrating Planning for Task-Completion Dialogue Policy Learning》一文，为设计对话策略学习模块提供了直接思路。\n- **构建系统知识框架**：团队以 App-DL 的目录结构为知识地图，将对话行为分类、用户模拟器、序列到序列模型等关键知识点有机组织起来，形成了对智能客服系统技术栈的完整认知。\n- **同步最新技术动态**：通过 App-DL 收录的近年顶会论文（如 EMNLP 2018 关于人格建模的研究），团队能持续吸收前沿方法，例如引入注意力记忆网络来让客服对话更具个性化和一致性，保持技术方案的竞争力。\n\nApp-DL 通过其精心整理的前沿论文与资源索引，将 AI 开发者从无序的信息海洋中解放出来，为特定 AI 应用场景提供了从理论到实践的“高速导航”。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Flipiji_App-DL_163abdd3.png","lipiji","Piji Li","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Flipiji_3b4a9cca.jpg","\r\nNUAA NLP Group","NUAA",null,"pagelee.sd@gmail.com","pijili","lipiji.com","https:\u002F\u002Fgithub.com\u002Flipiji",802,463,"2026-03-03T01:49:17",5,"Linux, macOS, Windows","未说明",{"notes":93,"python":91,"dependencies":94},"这是一个深度学习研究工具集，主要用于强化学习、对话系统和文本生成等任务。建议使用 conda 或 venv 管理 Python 环境，安装深度学习框架（如 PyTorch 或 TensorFlow）及其对应 CUDA 版本以支持 GPU 加速。首次运行可能需要下载预训练模型，具体大小未说明。",[95,96,97,98,99,100,101,102,103,104],"torch","transformers","accelerate","numpy","pandas","scikit-learn","tensorflow","keras","gensim","spacy",[15,37],"2026-03-27T02:49:30.150509","2026-04-06T05:36:38.482028",[109],{"id":110,"question_zh":111,"answer_zh":112,"source_url":113},4003,"论文《Salience Estimation via Variational Auto-Encoders for Multi-Document Summarization》的链接无法访问怎么办？","这是因为AAAI会议论文集尚未准备就绪。请耐心等待官方发布或稍后再尝试访问。","https:\u002F\u002Fgithub.com\u002Flipiji\u002FApp-DL\u002Fissues\u002F1",[]]