[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-tensorlayer--TensorLayer":3,"tool-tensorlayer--TensorLayer":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":76,"owner_website":78,"owner_url":79,"languages":80,"stars":100,"forks":101,"last_commit_at":102,"license":103,"difficulty_score":32,"env_os":104,"env_gpu":105,"env_ram":104,"env_deps":106,"category_tags":112,"github_topics":113,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":129,"updated_at":130,"faqs":131,"releases":162},7913,"tensorlayer\u002FTensorLayer","TensorLayer","Deep Learning and Reinforcement Learning Library for Scientists and Engineers ","TensorLayer 是一款基于 TensorFlow 构建的深度学习与强化学习开源库，旨在帮助科研人员与工程师快速搭建高级人工智能模型。它通过提供丰富且可灵活定制的神经网络层，有效解决了传统开发中底层代码复杂、模型构建耗时以及不同硬件后端适配困难等痛点，让用户能将更多精力集中在算法创新而非工程实现上。\n\n这款工具特别适合从事 AI 算法研究的研究员、需要高效原型的软件工程师，以及希望深入理解深度学习机制的高校师生。其核心亮点在于兼具简洁性与高性能：既拥有易于上手的高层抽象接口，支持分钟级入门，又保留了足够的底层灵活性以满足专业需求。此外，TensorLayer 不仅荣获过 ACM 多媒体学会“最佳开源软件”奖，还构建了完善的强化学习生态（RL Zoo），并提供从理论教程到工业级应用的全套资源。值得一提的是，其新一代版本 TensorLayerX 
已实现跨框架统一，支持 TensorFlow、PyTorch、MindSpore 等多种后端，并能无缝运行于 NVIDIA GPU 及华为昇腾等不同硬件之上，真正实现了“一次编写，多端运行”。","\u003Ca href=\"https:\u002F\u002Ftensorlayer.readthedocs.io\u002F\">\n    \u003Cdiv align=\"center\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13fab5d1e5ab.png\" width=\"50%\" height=\"30%\"\u002F>\n    \u003C\u002Fdiv>\n\u003C\u002Fa>\n\n\u003C!--- [![PyPI Version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftensorlayer.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftensorlayer) --->\n\u003C!--- ![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftensorlayer.svg)) --->\n\n![GitHub last commit (branch)](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Ftensorlayer\u002Ftensorlayer\u002Fmaster.svg)\n[![Supported TF Version](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-2.0.0%2B-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Freleases)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13d664e1afd7.png)](https:\u002F\u002Ftensorlayer.readthedocs.io\u002F)\n[![Build Status](https:\u002F\u002Ftravis-ci.org\u002Ftensorlayer\u002Ftensorlayer.svg?branch=master)](https:\u002F\u002Ftravis-ci.org\u002Ftensorlayer\u002Ftensorlayer)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_6d444397f929.png)](http:\u002F\u002Fpepy.tech\u002Fproject\u002Ftensorlayer)\n[![Downloads](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_9d58e572bf1c.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftensorlayer\u002Fweek)\n[![Docker Pulls](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ftensorlayer\u002Ftensorlayer.svg)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Ftensorlayer\u002F)\n[![Codacy 
Badge](https:\u002F\u002Fapi.codacy.com\u002Fproject\u002Fbadge\u002FGrade\u002Fd6b118784e25435498e7310745adb848)](https:\u002F\u002Fwww.codacy.com\u002Fapp\u002Ftensorlayer\u002Ftensorlayer)\n\n\u003C!---  [![CircleCI](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster.svg?style=svg)](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster) --->\n\n\u003C!---  [![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13d664e1afd7.png)](https:\u002F\u002Ftensorlayercn.readthedocs.io\u002F)\n\u003C!---  [![PyUP Updates](https:\u002F\u002Fpyup.io\u002Frepos\u002Fgithub\u002Ftensorlayer\u002Ftensorlayer\u002Fshield.svg)](https:\u002F\u002Fpyup.io\u002Frepos\u002Fgithub\u002Ftensorlayer\u002Ftensorlayer\u002F) --->\n\n# Please click [TensorLayerX](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayerx) 🔥🔥🔥\n\n[TensorLayer](https:\u002F\u002Ftensorlayer.readthedocs.io) is a novel TensorFlow-based deep learning and reinforcement learning library designed for researchers and engineers. It provides an extensive collection of customizable neural layers to build advanced AI models quickly, based on this, the community open-sourced mass [tutorials](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexamples\u002Freinforcement_learning\u002FREADME.md) and [applications](https:\u002F\u002Fgithub.com\u002Ftensorlayer). TensorLayer is awarded the 2017 Best Open Source Software by the [ACM Multimedia Society](https:\u002F\u002Ftwitter.com\u002FImperialDSI\u002Fstatus\u002F923928895325442049). 
\nThis project can also be found at [OpenI](https:\u002F\u002Fgit.openi.org.cn\u002FTensorLayer\u002Ftensorlayer3.0) and [Gitee](https:\u002F\u002Fgitee.com\u002Forganizations\u002FTensorLayer).\n\n# News\n\n- 🔥 [TensorLayerX](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayerx) is a Unified Deep Learning and Reinforcement Learning Framework for All Hardwares, Backends and OS. The current version supports TensorFlow, Pytorch, MindSpore, PaddlePaddle, OneFlow and Jittor as the backends, allowing users to run the code on different hardware like Nvidia-GPU and Huawei-Ascend.\n- TensorLayer is now in [OpenI](https:\u002F\u002Fgit.openi.org.cn\u002FTensorLayer\u002Ftensorlayer3.0)\n- Reinforcement Learning Zoo: [Low-level APIs](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexamples\u002Freinforcement_learning) for professional usage, [High-level APIs](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FRLzoo) for simple usage, and a corresponding [Springer textbook](http:\u002F\u002Fspringer.com\u002Fgp\u002Fbook\u002F9789811540943)\n- [Sipeed Maxi-EMC](https:\u002F\u002Fgithub.com\u002Fsipeed\u002FMaix-EMC): Run TensorLayer models on the **low-cost AI chip** (e.g., K210) (Alpha Version)\n\n\u003C!-- 🔥 [NNoM](https:\u002F\u002Fgithub.com\u002Fmajianjia\u002Fnnom): Run TensorLayer quantized models on the **MCU** (e.g., STM32) (Coming Soon) -->\n\n# Design Features\n\nTensorLayer is a new deep learning library designed with simplicity, flexibility and high-performance in mind.\n\n- ***Simplicity*** : TensorLayer has a high-level layer\u002Fmodel abstraction which is effortless to learn. You can learn how deep learning can benefit your AI tasks in minutes through the massive [examples](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fawesome-tensorlayer).\n- ***Flexibility*** : TensorLayer APIs are transparent and flexible, inspired by the emerging PyTorch library. 
Compared to the Keras abstraction, TensorLayer makes it much easier to build and train complex AI models.\n- ***Zero-cost Abstraction*** : Though simple to use, TensorLayer does not require you to make any compromise in the performance of TensorFlow (Check the following benchmark section for more details).\n\nTensorLayer stands at a unique spot in the TensorFlow wrappers. Other wrappers like Keras and TFLearn\nhide many powerful features of TensorFlow and provide little support for writing custom AI models. Inspired by PyTorch, TensorLayer APIs are simple, flexible and Pythonic,\nmaking it easy to learn while being flexible enough to cope with complex AI tasks.\nTensorLayer has a fast-growing community. It has been used by researchers and engineers all over the world, including those from  Peking University,\nImperial College London, UC Berkeley, Carnegie Mellon University, Stanford University, and companies like Google, Microsoft, Alibaba, Tencent, Xiaomi, and Bloomberg.\n\n# Multilingual Documents\n\nTensorLayer has extensive documentation for both beginners and professionals. 
The documentation is available in\nboth English and Chinese.\n\n[![English Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-english-blue.svg)](https:\u002F\u002Ftensorlayer.readthedocs.io\u002F)\n[![Chinese Documentation](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-%E4%B8%AD%E6%96%87-blue.svg)](https:\u002F\u002Ftensorlayercn.readthedocs.io\u002F)\n[![Chinese Book](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fbook-%E4%B8%AD%E6%96%87-blue.svg)](http:\u002F\u002Fwww.broadview.com.cn\u002Fbook\u002F5059\u002F)\n\nIf you want to try the experimental features on the master branch, you can find the latest document\n[here](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002F).\n\n# Extensive Examples\n\nYou can find a large collection of examples that use TensorLayer [here](examples\u002F) and in the following space:\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fawesome-tensorlayer\u002Fblob\u002Fmaster\u002Freadme.md\" target=\"\\_blank\">\n\t\u003Cdiv align=\"center\">\n\t\t\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_deb2b945efde.png\" width=\"40%\"\u002F>\n\t\u003C\u002Fdiv>\n\u003C\u002Fa>\n\n# Getting Started\n\nTensorLayer 2.0 relies on TensorFlow, numpy, and others. 
To use GPUs, CUDA and cuDNN are required.\n\nInstall TensorFlow:\n\n```bash\npip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU (version 2.0 RC1)\npip3 install tensorflow # CPU version\n```\n\nInstall the stable release of TensorLayer:\n\n```bash\npip3 install tensorlayer\n```\n\nInstall the unstable development version of TensorLayer:\n\n```bash\npip3 install git+https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer.git\n```\n\nIf you want to install the additional dependencies, you can also run\n```bash\npip3 install --upgrade tensorlayer[all]              # all additional dependencies\npip3 install --upgrade tensorlayer[extra]            # only the `extra` dependencies\npip3 install --upgrade tensorlayer[contrib_loggers]  # only the `contrib_loggers` dependencies\n```\n\nIf you are TensorFlow 1.X users, you can use TensorLayer 1.11.0:\n\n```bash\n# For last stable version of TensorLayer 1.X\npip3 install --upgrade tensorlayer==1.11.0\n```\n\n\u003C!---\n## Using Docker\n\nThe [TensorLayer containers](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Ftensorlayer\u002F) are built on top of the official [TensorFlow containers](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorflow\u002Ftensorflow\u002F):\n\n### Containers with CPU support\n\n```bash\n# for CPU version and Python 2\ndocker pull tensorlayer\u002Ftensorlayer:latest\ndocker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest\n\n# for CPU version and Python 3\ndocker pull tensorlayer\u002Ftensorlayer:latest-py3\ndocker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-py3\n```\n\n### Containers with GPU support\n\nNVIDIA-Docker is required for these containers to work: [Project Link](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n\n```bash\n# for GPU version and Python 2\ndocker pull tensorlayer\u002Ftensorlayer:latest-gpu\nnvidia-docker run -it --rm 
-p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-gpu\n\n# for GPU version and Python 3\ndocker pull tensorlayer\u002Ftensorlayer:latest-gpu-py3\nnvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-gpu-py3\n```\n--->\n\n# Performance Benchmark\n\nThe following table shows the training speeds of [VGG16](http:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fvery_deep\u002F) using TensorLayer and native TensorFlow on a TITAN Xp.\n\n|   Mode    |       Lib       |  Data Format  | Max GPU Memory Usage(MB)  |Max CPU Memory Usage(MB) | Avg CPU Memory Usage(MB) | Runtime (sec) |\n| :-------: | :-------------: | :-----------: | :-----------------: | :-----------------: | :-----------------: | :-----------: |\n| AutoGraph | TensorFlow 2.0  | channel last  | 11833 |      2161         |        2136         |      74       |\n|           | TensorLayer 2.0 | channel last  | 11833 |      2187         |        2169         |      76       |\n|   Graph   |      Keras      | channel last  | 8677 |      2580         |        2576         |      101       |\n|   Eager   | TensorFlow 2.0  | channel last  | 8723 |      2052         |        2024         |      97       |\n|           | TensorLayer 2.0 | channel last  | 8723 |      2010         |        2007         |      95       |\n\n# Getting Involved\n\nPlease read the [Contributor Guideline](CONTRIBUTING.md) before submitting your PRs.\n\nWe suggest users to report bugs using Github issues. 
Users can also discuss how to use TensorLayer in the following slack channel.\n\n\u003Cbr\u002F>\n\n\u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftensorlayer\u002Fshared_invite\u002FenQtODk1NTQ5NTY1OTM5LTQyMGZhN2UzZDBhM2I3YjYzZDBkNGExYzcyZDNmOGQzNmYzNjc3ZjE3MzhiMjlkMmNiMmM3Nzc4ZDY2YmNkMTY\" target=\"\\_blank\">\n\t\u003Cdiv align=\"center\">\n\t\t\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_3ea39b818553.png\" width=\"40%\"\u002F>\n\t\u003C\u002Fdiv>\n\u003C\u002Fa>\n\n\u003Cbr\u002F>\n\n# Citing TensorLayer\n\nIf you find TensorLayer useful for your project, please cite the following papers：\n\n```\n@article{tensorlayer2017,\n    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},\n    journal = {ACM Multimedia},\n    title   = {{TensorLayer: A Versatile Library for Efficient Deep Learning Development}},\n    url     = {http:\u002F\u002Ftensorlayer.org},\n    year    = {2017}\n}\n\n@inproceedings{tensorlayer2021,\n  title={Tensorlayer 3.0: A Deep Learning Library Compatible With Multiple Backends},\n  author={Lai, Cheng and Han, Jiarong and Dong, Hao},\n  booktitle={2021 IEEE International Conference on Multimedia \\& Expo Workshops (ICMEW)},\n  pages={1--3},\n  year={2021},\n  organization={IEEE}\n}\n```\n","\u003Ca href=\"https:\u002F\u002Ftensorlayer.readthedocs.io\u002F\">\n    \u003Cdiv align=\"center\">\n        \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13fab5d1e5ab.png\" width=\"50%\" height=\"30%\"\u002F>\n    \u003C\u002Fdiv>\n\u003C\u002Fa>\n\n\u003C!--- [![PyPI Version](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftensorlayer.svg)](https:\u002F\u002Fbadge.fury.io\u002Fpy\u002Ftensorlayer) --->\n\u003C!--- ![PyPI - Python Version](https:\u002F\u002Fimg.shields.io\u002Fpypi\u002Fpyversions\u002Ftensorlayer.svg)) 
--->\n\n![GitHub最后提交（分支）](https:\u002F\u002Fimg.shields.io\u002Fgithub\u002Flast-commit\u002Ftensorlayer\u002Ftensorlayer\u002Fmaster.svg)\n[![支持的TensorFlow版本](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002FTensorFlow-2.0.0%2B-brightgreen.svg)](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftensorflow\u002Freleases)\n[![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13d664e1afd7.png)](https:\u002F\u002Ftensorlayer.readthedocs.io\u002F)\n[![构建状态](https:\u002F\u002Ftravis-ci.org\u002Ftensorlayer\u002Ftensorlayer.svg?branch=master)](https:\u002F\u002Ftravis-ci.org\u002Ftensorlayer\u002Ftensorlayer)\n[![下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_6d444397f929.png)](http:\u002F\u002Fpepy.tech\u002Fproject\u002Ftensorlayer)\n[![每周下载量](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_9d58e572bf1c.png)](https:\u002F\u002Fpepy.tech\u002Fproject\u002Ftensorlayer\u002Fweek)\n[![Docker拉取次数](https:\u002F\u002Fimg.shields.io\u002Fdocker\u002Fpulls\u002Ftensorlayer\u002Ftensorlayer.svg)](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Ftensorlayer\u002F)\n[![Codacy Badge](https:\u002F\u002Fapi.codacy.com\u002Fproject\u002Fbadge\u002FGrade\u002Fd6b118784e25435498e7310745adb848)](https:\u002F\u002Fwww.codacy.com\u002Fapp\u002Ftensorlayer\u002Ftensorlayer)\n\n\u003C!---  [![CircleCI](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster.svg?style=svg)](https:\u002F\u002Fcircleci.com\u002Fgh\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster) --->\n\n\u003C!---  [![文档状态](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_13d664e1afd7.png)](https:\u002F\u002Ftensorlayercn.readthedocs.io\u002F)\n\u003C!---  [![PyUP 
Updates](https:\u002F\u002Fpyup.io\u002Frepos\u002Fgithub\u002Ftensorlayer\u002Ftensorlayer\u002Fshield.svg)](https:\u002F\u002Fpyup.io\u002Frepos\u002Fgithub\u002Ftensorlayer\u002Ftensorlayer\u002F) --->\n\n# 请点击[TensorLayerX](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayerx) 🔥🔥🔥\n\n[TensorLayer](https:\u002F\u002Ftensorlayer.readthedocs.io) 是一款基于 TensorFlow 的新型深度学习和强化学习库，专为研究人员和工程师设计。它提供了一系列可定制的神经网络层，帮助用户快速构建先进的 AI 模型。在此基础上，社区开源了大量的[教程](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexamples\u002Freinforcement_learning\u002FREADME.md)和[应用](https:\u002F\u002Fgithub.com\u002Ftensorlayer)。TensorLayer 曾荣获 ACM 多媒体协会颁发的 2017 年度最佳开源软件奖（[推文链接](https:\u002F\u002Ftwitter.com\u002FImperialDSI\u002Fstatus\u002F923928895325442049)）。该项目也可在 [OpenI](https:\u002F\u002Fgit.openi.org.cn\u002FTensorLayer\u002Ftensorlayer3.0) 和 [Gitee](https:\u002F\u002Fgitee.com\u002Forganizations\u002FTensorLayer) 上找到。\n\n# 新闻\n\n- 🔥 [TensorLayerX](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayerx) 是一个适用于所有硬件、后端和操作系统的统一深度学习与强化学习框架。当前版本支持 TensorFlow、PyTorch、MindSpore、PaddlePaddle、OneFlow 和 Jittor 等后端，允许用户在 Nvidia-GPU 和 Huawei-Ascend 等不同硬件上运行代码。\n- TensorLayer 现已在 [OpenI](https:\u002F\u002Fgit.openi.org.cn\u002FTensorLayer\u002Ftensorlayer3.0) 上发布。\n- 强化学习动物园：提供面向专业使用的[低级 API](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexamples\u002Freinforcement_learning)，以及面向简单使用的[高级 API](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FRLzoo)，并配套出版了[施普林格教材](http:\u002F\u002Fspringer.com\u002Fgp\u002Fbook\u002F9789811540943)。\n- [Sipeed Maxi-EMC](https:\u002F\u002Fgithub.com\u002Fsipeed\u002FMaix-EMC)：可在**低成本 AI 芯片**（如 K210）上运行 TensorLayer 模型（Alpha 版本）。\n\n\u003C!-- 🔥 [NNoM](https:\u002F\u002Fgithub.com\u002Fmajianjia\u002Fnnom)：可在**MCU**（如 STM32）上运行 TensorLayer 量化模型（即将推出） -->\n\n# 设计特点\n\nTensorLayer 是一款以简洁、灵活和高性能为核心设计理念的新式深度学习库。\n\n- ***简洁性***：TensorLayer 
提供了易于掌握的高层层\u002F模型抽象。通过丰富的[示例](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fawesome-tensorlayer)，您可以在几分钟内了解深度学习如何助力您的 AI 任务。\n- ***灵活性***：受新兴的 PyTorch 库启发，TensorLayer 的 API 设计透明且灵活。与 Keras 抽象相比，TensorLayer 更容易用于构建和训练复杂的 AI 模型。\n- ***零成本抽象***：尽管使用简便，TensorLayer 并不会牺牲 TensorFlow 的性能（更多细节请参阅下方的基准测试部分）。\n\nTensorLayer 在 TensorFlow 封装库中占据着独特的位置。与其他封装库（如 Keras 和 TFLearn）不同，TensorLayer 并未隐藏 TensorFlow 的许多强大功能，反而为编写自定义 AI 模型提供了充分支持。受 PyTorch 启发，TensorLayer 的 API 简洁、灵活且符合 Python 风格，既易于学习，又能应对复杂的 AI 任务。\n\nTensorLayer 拥有一个快速发展的社区，已被全球各地的研究人员和工程师广泛使用，其中包括来自北京大学、伦敦帝国学院、加州大学伯克利分校、卡内基梅隆大学、斯坦福大学等高校，以及谷歌、微软、阿里巴巴、腾讯、小米、彭博社等公司的专业人士。\n\n# 多语言文档\n\nTensorLayer 为初学者和专业人士都提供了详尽的文档，支持英语和中文两种语言。\n\n[![英文文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-english-blue.svg)](https:\u002F\u002Ftensorlayer.readthedocs.io\u002F)\n[![中文文档](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fdocumentation-%E4%B8%AD%E6%96%87-blue.svg)](https:\u002F\u002Ftensorlayercn.readthedocs.io\u002F)\n[![中文书籍](https:\u002F\u002Fimg.shields.io\u002Fbadge\u002Fbook-%E4%B8%AD%E6%96%87-blue.svg)](http:\u002F\u002Fwww.broadview.com.cn\u002Fbook\u002F5059\u002F)\n\n如果您想尝试主分支上的实验性功能，可以在这里找到最新文档：\n\n[此处](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002F)。\n\n# 丰富的示例\n\n您可以在[这里](examples\u002F)以及以下空间中找到大量使用 TensorLayer 的示例：\n\n\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fawesome-tensorlayer\u002Fblob\u002Fmaster\u002Freadme.md\" target=\"\\_blank\">\n\t\u003Cdiv align=\"center\">\n\t\t\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_deb2b945efde.png\" width=\"40%\"\u002F>\n\t\u003C\u002Fdiv>\n\u003C\u002Fa>\n\n# 开始使用\n\nTensorLayer 2.0 依赖于 TensorFlow、numpy 等。若要使用 GPU，还需要安装 CUDA 和 cuDNN。\n\n安装 TensorFlow：\n\n```bash\npip3 install tensorflow-gpu==2.0.0-rc1 # TensorFlow GPU（版本 2.0 RC1）\npip3 install tensorflow # CPU 版本\n```\n\n安装 TensorLayer 的稳定版：\n\n```bash\npip3 install tensorlayer\n```\n\n安装 TensorLayer 
的开发版：\n\n```bash\npip3 install git+https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer.git\n```\n\n如果需要安装额外的依赖项，还可以运行以下命令：\n```bash\npip3 install --upgrade tensorlayer[all]              # 所有额外依赖项\npip3 install --upgrade tensorlayer[extra]            # 仅 `extra` 依赖项\npip3 install --upgrade tensorlayer[contrib_loggers]  # 仅 `contrib_loggers` 依赖项\n```\n\n如果您是 TensorFlow 1.X 的用户，可以使用 TensorLayer 1.11.0：\n\n```bash\n# 对于 TensorLayer 1.X 的最新稳定版本\npip3 install --upgrade tensorlayer==1.11.0\n```\n\n\u003C!--\n## 使用 Docker\n\n[TensorLayer 容器](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorlayer\u002Ftensorlayer\u002F) 基于官方的 [TensorFlow 容器](https:\u002F\u002Fhub.docker.com\u002Fr\u002Ftensorflow\u002Ftensorflow\u002F) 构建：\n\n### 支持 CPU 的容器\n\n```bash\n# 对于 CPU 版本和 Python 2\ndocker pull tensorlayer\u002Ftensorlayer:latest\ndocker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest\n\n# 对于 CPU 版本和 Python 3\ndocker pull tensorlayer\u002Ftensorlayer:latest-py3\ndocker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-py3\n```\n\n### 支持 GPU 的容器\n\n这些容器需要 NVIDIA-Docker 才能正常运行：[项目链接](https:\u002F\u002Fgithub.com\u002FNVIDIA\u002Fnvidia-docker)\n\n```bash\n# 对于 GPU 版本和 Python 2\ndocker pull tensorlayer\u002Ftensorlayer:latest-gpu\nnvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-gpu\n\n# 对于 GPU 版本和 Python 3\ndocker pull tensorlayer\u002Ftensorlayer:latest-gpu-py3\nnvidia-docker run -it --rm -p 8888:8888 -p 6006:6006 -e PASSWORD=JUPYTER_NB_PASSWORD tensorlayer\u002Ftensorlayer:latest-gpu-py3\n```\n-->\n\n# 性能基准测试\n\n下表展示了在 TITAN Xp 上，使用 TensorLayer 和原生 TensorFlow 训练 [VGG16](http:\u002F\u002Fwww.robots.ox.ac.uk\u002F~vgg\u002Fresearch\u002Fvery_deep\u002F) 的速度对比。\n\n|   模式    |       库       |  数据格式  | 最大 GPU 内存占用(MB)  |最大 CPU 内存占用(MB) | 平均 CPU 内存占用(MB) | 运行时间 (秒) |\n| :-------: | 
:-------------: | :-----------: | :-----------------: | :-----------------: | :-----------------: | :-----------: |\n| AutoGraph | TensorFlow 2.0  | channel last  | 11833 |      2161         |        2136         |      74       |\n|           | TensorLayer 2.0 | channel last  | 11833 |      2187         |        2169         |      76       |\n|   Graph   |      Keras      | channel last  | 8677 |      2580         |        2576         |      101       |\n|   Eager   | TensorFlow 2.0  | channel last  | 8723 |      2052         |        2024         |      97       |\n|           | TensorLayer 2.0 | channel last  | 8723 |      2010         |        2007         |      95       |\n\n# 参与贡献\n\n请在提交 PR 之前阅读 [贡献者指南](CONTRIBUTING.md)。\n\n我们建议用户通过 Github Issues 报告问题。您也可以在以下 Slack 频道中讨论如何使用 TensorLayer。\n\n\u003Cbr\u002F>\n\n\u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftensorlayer\u002Fshared_invite\u002FenQtODk1NTQ5NTY1OTM5LTQyMGZhN2UzZDBhM2I3YjYzZDBkNGExYzcyZDNmOGQzNmYzNjc3ZjE3MzhiMjlkMmNiMmM3Nzc4ZDY2YmNkMTY\" target=\"\\_blank\">\n\t\u003Cdiv align=\"center\">\n\t\t\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_readme_3ea39b818553.png\" width=\"40%\"\u002F>\n\t\u003C\u002Fdiv>\n\u003C\u002Fa>\n\n\u003Cbr\u002F>\n\n# 引用 TensorLayer\n\n如果您觉得 TensorLayer 对您的项目有所帮助，请引用以下论文：\n\n```\n@article{tensorlayer2017,\n    author  = {Dong, Hao and Supratak, Akara and Mai, Luo and Liu, Fangde and Oehmichen, Axel and Yu, Simiao and Guo, Yike},\n    journal = {ACM Multimedia},\n    title   = {{TensorLayer: 一个用于高效深度学习开发的多功能库}},\n    url     = {http:\u002F\u002Ftensorlayer.org},\n    year    = {2017}\n}\n\n@inproceedings{tensorlayer2021,\n  title={Tensorlayer 3.0: 一个兼容多种后端的深度学习库},\n  author={Lai, Cheng and Han, Jiarong and Dong, Hao},\n  booktitle={2021 IEEE 国际多媒体与博览会研讨会 (ICMEW)},\n  pages={1--3},\n  year={2021},\n  organization={IEEE}\n}\n```","# TensorLayer 快速上手指南\n\nTensorLayer 是一个基于 TensorFlow 
构建的新型深度学习与强化学习库，专为研究人员和工程师设计。它以简洁、灵活和高性能著称，提供了丰富的可定制神经网络层，帮助用户快速构建高级 AI 模型。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows\n*   **Python 版本**：推荐 Python 3.6+\n*   **核心依赖**：\n    *   TensorFlow 2.0.0 或更高版本\n    *   NumPy\n*   **GPU 加速（可选）**：如需使用 GPU 进行训练，需安装 CUDA 和 cuDNN。\n\n## 安装步骤\n\n您可以选择通过 `pip` 直接安装稳定版，或从源码安装开发版。国内用户建议使用清华源或阿里源以加速下载。\n\n### 1. 安装 TensorFlow\n\n首先安装后端依赖 TensorFlow。\n\n```bash\n# 安装 GPU 版本 (需预先配置好 CUDA\u002FcuDNN)\npip3 install tensorflow-gpu==2.0.0-rc1 -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 或者安装 CPU 版本\npip3 install tensorflow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2. 安装 TensorLayer\n\n**安装稳定发布版：**\n\n```bash\npip3 install tensorlayer -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n**安装开发版（包含最新功能）：**\n\n```bash\npip3 install git+https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer.git\n```\n\n**安装额外依赖（可选）：**\n\n```bash\n# 安装所有额外依赖\npip3 install --upgrade tensorlayer[all] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 仅安装 extra 依赖\npip3 install --upgrade tensorlayer[extra] -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：如果您仍在使用 TensorFlow 1.X，请安装 TensorLayer 1.11.0 版本：\n> `pip3 install --upgrade tensorlayer==1.11.0`\n\n## 基本使用\n\nTensorLayer 的 API 设计透明且灵活，类似于 PyTorch 的风格，易于上手。以下是一个最简单的全连接网络构建示例：\n\n```python\nimport tensorflow as tf\nimport tensorlayer as tl\n\n# 定义输入占位符\nx = tf.placeholder(tf.float32, shape=[None, 784], name='x')\ny_ = tf.placeholder(tf.int64, shape=[None], name='y_')\n\n# 构建网络\nnet = tl.layers.InputLayer(x, name='input_layer')\nnet = tl.layers.DenseLayer(net, n_units=800, act=tf.nn.relu, name='relu1')\nnet = tl.layers.DropoutLayer(net, keep=0.8, name='drop1')\nnet = tl.layers.DenseLayer(net, n_units=10, act=tf.identity, name='output_layer')\n\n# 定义损失函数和优化器\nce = tf.reduce_mean(tf.losses.sparse_softmax_cross_entropy(labels=y_, logits=net.outputs))\ntrain_op = 
tf.train.AdamOptimizer(learning_rate=0.0001).minimize(ce)\n\n# 初始化变量并启动会话\nwith tf.Session() as sess:\n    tl.layers.initialize_global_variables(sess)\n    \n    # 此处可添加训练循环代码\n    # for epoch in range(n_epoch):\n    #     ...\n    print(\"Model built successfully!\")\n```\n\n更多详细示例和教程（包括强化学习、计算机视觉等）请访问 [TensorLayer 官方示例库](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fawesome-tensorlayer)。","某自动驾驶初创团队的算法工程师正在开发一套基于视觉的实时路况感知系统，需要快速构建并部署复杂的深度神经网络模型。\n\n### 没有 TensorLayer 时\n- **底层代码冗余**：工程师需手动编写大量 TensorFlow 原生代码来定义卷积、池化等基础层，导致核心逻辑被繁琐的矩阵运算淹没，开发效率极低。\n- **模型复用困难**：每次尝试新的网络架构（如从 CNN 切换到 ResNet）都需要重构大部分代码，缺乏模块化的层级抽象，难以快速验证想法。\n- **强化学习门槛高**：若需引入强化学习进行决策优化，团队必须从零搭建环境交互接口和训练循环，耗时数周且极易出错。\n- **跨平台部署复杂**：将训练好的模型迁移到边缘设备（如华为 Ascend 或低成本 AI 芯片）时，因缺乏统一框架支持，需耗费大量精力进行算子适配和代码重写。\n\n### 使用 TensorLayer 后\n- **高层抽象提效**：利用 TensorLayer 丰富的高阶层接口，工程师仅需几行代码即可组装出复杂的神经网络，将注意力完全集中在算法创新而非底层实现上。\n- **灵活架构迭代**：借助其模块化设计，团队成员可像搭积木般自由替换网络组件，在几分钟内完成多种主流模型的切换与对比实验。\n- **开箱即用的 RL 支持**：直接调用内置的强化学习示例和高级 API（RLzoo），迅速建立起智能驾驶决策模型，大幅缩短从理论到原型的周期。\n- **无缝跨端运行**：依托 TensorLayerX 的统一框架特性，同一套代码可轻松在后端切换至 PyTorch 或 MindSpore，并直接部署到各类硬件加速卡上，无需修改核心逻辑。\n\nTensorLayer 通过极简的高层抽象与强大的跨平台能力，让科研人员能将原本数周的模型构建与部署工作压缩至数天，真正实现了“所想即所得”的高效研发流程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Ftensorlayer_TensorLayer_cbb99656.png","tensorlayer","TensorLayer Community","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Ftensorlayer_5198405d.png","A neutral open community to promote AI 
technology.",null,"tensorlayer@gmail.com","http:\u002F\u002Fwww.tensorlayerx.com\u002Findex_en.html?chlang=&langid=2","https:\u002F\u002Fgithub.com\u002Ftensorlayer",[81,85,89,93,97],{"name":82,"color":83,"percentage":84},"Python","#3572A5",99.6,{"name":86,"color":87,"percentage":88},"Shell","#89e051",0.2,{"name":90,"color":91,"percentage":92},"Dockerfile","#384d54",0.1,{"name":94,"color":95,"percentage":96},"Makefile","#427819",0,{"name":98,"color":99,"percentage":96},"Batchfile","#C1F12E",7391,1589,"2026-04-10T16:10:44","NOASSERTION","未说明","非必需（支持 CPU 版本）；若使用 GPU 需 NVIDIA 显卡，并安装 CUDA 和 cuDNN（具体版本取决于安装的 TensorFlow 版本，示例中为 TF 2.0）",{"notes":107,"python":108,"dependencies":109},"该工具主要基于 TensorFlow 2.0+。若需使用 TensorFlow 1.X，请安装 TensorLayer 1.11.0 版本。提供 Docker 镜像以简化环境配置（含 CPU 和 GPU 版本）。推荐使用 'pip3 install tensorlayer[all]' 安装所有额外依赖。","2.x 或 3.x (Docker 镜像区分了 Python 2 和 Python 3 版本，具体 pip 安装未限制小版本号)",[110,111],"tensorflow>=2.0.0","numpy",[14,35,15],[72,114,115,116,117,118,119,120,121,122,123,124,125,126,127,128],"deep-learning","tensorflow","neural-network","reinforcement-learning","artificial-intelligence","gan","a3c","tensorflow-tutorials","dqn","object-detection","chatbot","python","tensorflow-tutorial","imagenet","google","2026-03-27T02:49:30.150509","2026-04-16T08:19:17.647723",[132,137,142,147,152,157],{"id":133,"question_zh":134,"answer_zh":135,"source_url":136},35434,"如何在 TensorLayer 中轻松使用预训练模型（如 ResNet、VGG）？","TensorLayer 现已通过 `tl.models` 模块支持预训练模型。您可以访问 pretrained-models 仓库查看具体用法。此外，TL 提供了 `SlimNetsLayer`，可以直接使用 Google TF-Slim 的所有预训练模型（如 Inception V3）。示例代码可参考 tutorial_inceptionV3_tfslim.py。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F367",{"id":138,"question_zh":139,"answer_zh":140,"source_url":141},35435,"如何通过层名称（字符串）获取之前的某一层？","该功能已在 PR #755 
中实现。现在您可以通过层名称字典式访问之前的层，例如：`net['dense1'].outputs`。底层实现采用了链表结构来管理层之间的引用。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F512",{"id":143,"question_zh":144,"answer_zh":145,"source_url":146},35436,"当初始化器（initializer）是常数时，如何避免报错\"do not specify shape\"？","当 `W_init` 不是可调用对象（即它是常数）时，不应传递 `shape` 参数。代码逻辑应调整为：如果 `W_init` 存在且不可调用，则忽略 shape 参数直接初始化；否则需要指定 shape。即：`tf.get_variable(name='W', initializer=W_init)`（无常数时无需 shape），反之则需 `tf.get_variable(name='W', shape=SHAPE, initializer=W_init)`。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F373",{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},35437,"tutorial_bipedalwalker_a3c_continuous_action.py 示例不收敛怎么办？","该问题已修复。原始 A3C 作者已更新代码以兼容 TF1.8。如果不收敛，可能是多线程同步问题，尝试引入信号量（semaphores）机制来同步线程间的权重更新，这通常能改善收敛效果。请确保使用最新版本的示例代码。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F649",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},35438,"TensorLayer 2.0 在 API 设计上有哪些主要变化？","TensorLayer 2.0 将打破向后兼容性以优化 API。主要变化包括：1. 层实例化方式改变，从 `Layer(net, params)` 变为 `Layer(params)(net)`，便于复用具有相同参数的层；2. 支持类似 Keras 的 `model.add()` 写法；3. 引入自动命名功能（`tl.layers.auto_name()`）；4. 
支持类似 ResNet 的跳跃连接语法（如 `model(\"conv1\", \"conv3\").add(Concat())`）。这些改动旨在解决 1.x 版本中代码冗余和易出错的问题。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F770",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},35439,"如何正确使用 DynamicRNNLayer 并处理输出形状？","在使用 DynamicRNNLayer 时，计算最大长度（max_length）应基于内部生成的 `outputs` 变量，而不是 `self.outputs`。正确的代码片段应为：`max_length = tf.shape(outputs)[1]`，然后进行 reshape 操作：`self.outputs = tf.reshape(tf.concat(1, outputs), [-1, max_length, n_hidden])`。请参考相关示例代码中的 `EmbeddingInputlayer` 用法。","https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002FTensorLayer\u002Fissues\u002F18",[163,168,173,178,183,188,193,198,203,208,213,218,222,227,232,237,242,247,252,257],{"id":164,"version":165,"summary_zh":166,"released_at":167},280493,"3.0.0-alpha","各位好，\n\n我们非常荣幸地提前发布 TensorLayer 3.0.0-alpha 版本。该版本支持 TensorFlow 和 MindSpore 后端，并部分支持 PaddlePaddle 的算子后端，使用户能够在 NVIDIA GPU 和华为昇腾等不同硬件上运行代码。\n\n下一步，我们计划在未来支持 TensorFlow、MindSpore、PaddlePaddle 和 PyTorch 四大深度学习框架的后端。欢迎大家试用，并提出宝贵建议。\n\nTensorLayer 3.0.0-alpha 属于维护版更新。","2021-07-08T05:20:03",{"id":169,"version":170,"summary_zh":171,"released_at":172},280494,"v2.2.4","TensorLayer 2.2.4 是一个维护版本。\n\n### 新增\n\n### 变更\n\n### 依赖更新\n\n### 已弃用\n\n### 修复\n\n- 修复批归一化(#1104)\n- 修复循环神经网络(#1106)\n\n### 移除\n\n### 安全\n\n### 贡献者\n- @zsdonghao\n- @Laicheng0830(#1104)\n- @Thinkre(#1106)","2021-01-06T07:16:21",{"id":174,"version":175,"summary_zh":176,"released_at":177},280495,"2.2.3","TensorLayer 2.2.3 是一个维护版本。\n它包含大量的错误修复。\n\n### 新增\n\n### 变更\n\n### 依赖更新\n\n### 已弃用\n\n### 修复\n\n- 修复 VGG。(#1078, 1079, 1089)\n- 修复归一化层。(#1080)\n- 修复 DeCov2d 层。(#1081)\n- 修复 ModelLayer 和 LayerList 的文档。(#1083)\n- 修复 SAC 中的 bug。(#1085)\n- 修复重构：去重。(#1086)\n- 修复最大池化、批归一化数据格式问题，以及 VGG 的前向传播。(#1089)\n- 修复包信息。(#1090)\n\n### 移除\n\n### 安全\n\n### 贡献者\n- @zsdonghao\n- @tiancheng2000 (#1078 #1079 #1080 #1081)\n- @ChrisWu1997 (#1083)\n- @quantumiracle (#1085)\n- @marload (#1086)\n- @Gyx-One (#1089)\n- @Laicheng0830 
(#1090)","2020-06-19T00:44:45",{"id":179,"version":180,"summary_zh":181,"released_at":182},280496,"2.2.2","TensorLayer 2.2.2 是一个维护版本。\n\n### 新增\n\n- 强化学习 (#1065)\n- Mish 激活函数 (#1068)\n\n### 修复\n\n- 修复 README\n- 修复包信息\n\n### 贡献者\n\n- @zsdonghao\n- @quantumiracle (#1065)\n- @Laicheng0830 (#1068)","2020-04-26T15:48:50",{"id":184,"version":185,"summary_zh":186,"released_at":187},280497,"2.2.1","TensorLayer 2.2.1 是一个维护版本。\n它包含多项错误修复。\n\n## 修复\n\n- 修复 README。(#1044)\n- 修复包信息。(#1046)\n- 修复构建测试（使用 YAPF 0.29）。(#1057)\n\n## 贡献者\n\n- @luomai (#1044, #1046, #1057)","2020-01-14T08:05:09",{"id":189,"version":190,"summary_zh":191,"released_at":192},280498,"v2.2.0","TensorLayer 2.2.0 是一个维护版本。 \r\n它包含多项 API 改进和错误修复。 \r\n此版本与 TensorFlow 2 RC1 兼容。 \n\n### 新增功能\n- 支持嵌套层自定义 (#PR 1015)\n- 在 InputLayer 中支持字符串 dtype (#PR 1017)\n- 在 RNN 中支持动态 RNN (#PR 1023)\n- 添加 ResNet50 静态模型 (#PR 1030)\n- 为静态模型添加性能测试代码 (#PR 1041)\n\n### 变更\n- `SpatialTransform2dAffine` 自动设置 `in_channels`\n- 支持 TensorFlow 2.0.0-rc1\n- 更新模型权重属性，现在返回其副本 (#PR 1010)\n\n### 修复\n- RNN 更新：移除警告、修复 seq_len=0 的情况、完善单元测试 (#PR 1033)\n- BN 更新：修复 BatchNorm1d 对于 2D 数据的问题，并进行了重构 (#PR 1040)\n\n### 依赖项更新\n\n### 已弃用\n\n### 修复\n- 修复 `tf.models.Model._construct_graph` 对于输出列表的情况，例如 STN 场景 (PR #1010)\n- 改进 `in_channels` 异常抛出机制。 (PR #1015)\n- 在 np.load() 中设置 allow_pickle=True (#PR 1021)\n- 移除 `private_method` 装饰器 (#PR 1025)\n- 初始化 `ModelLayer` 时复制原模型的 `trainable_weights` 和 `nontrainable_weights` (#PR 1026)\n- 初始化 `LayerList` 时复制原模型的 `trainable_weights` 和 `nontrainable_weights` (#PR 1029)\n- 移除 `model.all_layers` 中的冗余部分 (#PR 1029)\n- 将 `tf.image.resize_image_with_crop_or_pad` 替换为 `tf.image.resize_with_crop_or_pad` (#PR 1032)\n- 修复 ResNet50 静态模型中的一个 bug (#PR 1041)\n\n### 移除\n\n### 安全性\n\n### 贡献者\n- @zsdonghao\n- @luomai\n- @ChrisWu1997：#1010 #1015 #1025 #1030 #1040\n- @warshallrho：#1017 #1021 #1026 #1029 #1032 #1041\n- @ArnoldLIULJ：#1023\n- 
@JingqingZ：#1023","2019-09-13T23:11:21",{"id":194,"version":195,"summary_zh":196,"released_at":197},280499,"2.1.0","各位好，\n\n本次发布有三点需要说明：\n\n- [深度强化学习模型库](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexamples\u002Freinforcement_learning) 正式发布！！！\n- 我们将正式支持更多用于自然语言处理的注意力机制模型。\n- `model.conf` 配置文件已基本稳定，来自 Sipeed 的 AIoT 团队正在努力实现 TensorLayer 模型在 AI 芯片上的部署。\n\n祝大家使用愉快！\n\nTensorLayer 团队\n\n### 变更\n- 在 model.config 中添加 version_info。（PR #992）\n- 将 model config 中的 tf.nn.func 替换为 tf.nn.func.__name__。\n- 增加强化学习教程。（PR #995）\n- 添加包含简单 RNN 单元、GRU 单元和 LSTM 单元的 RNN 层。（PR #998）\n- 更新 Seq2seq 模型（#998）。\n- 新增 Seq2seqLuongAttention 模型（#998）。\n\n### 贡献者\n- @warshallrho:\n- @quantumiracle: #995\n- @Tokarev-TT-33: #995\n- @initial-h: #995\n- @Officium: #995\n- @ArnoldLIULJ: #998\n- @JingqingZ: #998\n","2019-06-16T11:55:52",{"id":199,"version":200,"summary_zh":201,"released_at":202},280500,"2.0.2","您好，我们想跟您分享一个好消息。  \n如今，AI芯片已经无处不在——从我们的手机到汽车，但对我们普通人来说，拥有一颗属于自己的AI芯片仍然是一件难事。  \n为了改变这一现状，TensorLayer团队正致力于AIoT领域，并将很快支持在低成本AI芯片（如K210）和微控制器（如STM32）上运行TensorLayer模型。具体进展如下：\n\n- [NNoM](https:\u002F\u002Fgithub.com\u002Fmajianjia\u002Fnnom) 是一个专为微控制器（MCU）设计的、基于层的高级神经网络库。我们团队与NNoM的作者正在紧密合作，努力让TensorLayer模型能够在多种MCU上运行。没错！就像**BinaryNet**那样。\n- [K210](https:\u002F\u002Fkendryte.com) 是一款低成本的AI芯片。我们正与K210的设计者以及[Sipeed](https:\u002F\u002Fgithub.com\u002Fsipeed)团队合作，使TensorLayer模型能够在K210 AI芯片上顺利运行。\n\n如果您对AIoT感兴趣，欢迎加入我们的[Slack](https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftensorlayer\u002Fshared_invite\u002FenQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc)社区交流讨论。\n\n\u003Cbr\u002F>\n\n\u003Ca href=\"https:\u002F\u002Fjoin.slack.com\u002Ft\u002Ftensorlayer\u002Fshared_invite\u002FenQtMjUyMjczMzU2Njg4LWI0MWU0MDFkOWY2YjQ4YjVhMzI5M2VlZmE4YTNhNGY1NjZhMzUwMmQ2MTc0YWRjMjQzMjdjMTg2MWQ2ZWJhYzc\" target=\"\\_blank\">\n\t\u003Cdiv align=\"center\">\n\t\t\u003Cimg 
src=\"https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fraw\u002Fmaster\u002Fimg\u002Fjoin_slack.png\" width=\"40%\"\u002F>\n\t\u003C\u002Fdiv>\n\u003C\u002Fa>\n\n\u003Cbr\u002F>\n\nTensorLayer、Sipeed、NNoM团队\n\n=======\n\n版本维护更新，建议升级。\n\n### 变更\n- 调整了网络配置的格式，并相应修改了相关代码和文件；同时调整了层的激活函数。（PR #980）\n- 更新了Seq2seq模块。（#989）\n\n### 修复\n- 修复了动态模型无法追踪PRelu权重梯度的问题。（PR #982）\n- 提升了关于`.weights`属性的警告信息。（提交记录）\n\n### 贡献者\n- @warshallrho：#980\n- @ArnoldLIULJ：#989\n- @1FengL：#982","2019-06-05T13:29:39",{"id":204,"version":205,"summary_zh":206,"released_at":207},280501,"2.0.1","保持发布状态，建议更新。\n\n### 变更\n- 移除 `tl.layers.initialize_global_variables(sess)`（PR #931）\n- 支持 `trainable_weights`（PR #966）\n\n### 新增\n- 层\n  - `InstanceNorm`、`InstanceNorm1d`、`InstanceNorm2d`、`InstanceNorm3d`（PR #963）\n\n### 变更\n- 移除 `tl.layers.initialize_global_variables(sess)`（PR #931）\n- 更改 `tl.layers.core`、`tl.models.core`（PR #966）\n- 将 `weights` 改为 `all_weights`、`trainable_weights`、`nontrainable_weights`\n\n### 依赖更新\n- nltk>=3.3,\u003C3.4 => nltk>=3.3,\u003C3.5（PR #892）\n- pytest>=3.6,\u003C3.11 => pytest>=3.6,\u003C4.1（PR #889）\n- yapf>=0.22,\u003C0.25 => yapf==0.25.0（PR #896）\n- imageio==2.5.0 progressbar2==3.39.3  scikit-learn==0.21.0 scikit-image==0.15.0 scipy==1.2.1 wrapt==1.11.1 pymongo==3.8.0 sphinx==2.0.1 wrapt==1.11.1 opencv-python==4.1.0.25 requests==2.21.0 tqdm==4.31.1\tlxml==4.3.3 pycodestyle==2.5.0 sphinx==2.0.1 yapf==0.27.0（PR #967）\n\n### 修复\n- 修复模型文档 @zsdonghao #957\n- 在 `BatchNorm` 中，保持均值和方差的维度以适应 `channels first` 模式（PR #963）\n\n### 贡献者\n- @warshallrho：#966\n- @zsdonghao：#931\n- @yd-yin：#963\n- @dvklopfenstein：#971","2019-05-17T12:16:25",{"id":209,"version":210,"summary_zh":211,"released_at":212},280502,"2.0.0","亲爱的各位，\n\n我们非常荣幸地宣布 TensorLayer 2.0.0 正式发布。在过去几个月里，我们对所有层进行了重构，以支持 TensorFlow 2.0.0-alpha0 和动态模式！与其它库相比，新的 API 设计使您能够更轻松地自定义层。\n\n在此，我们要感谢所有贡献者，尤其是来自北京大学和帝国理工学院的核心成员：@zsdonghao、@JingqingZ、@ChrisWu1997 和 @warshallrho。所有贡献者的名单如下。\n\n下一步，我们计划支持更多用于 3D 视觉的高级功能，例如 
PointCNN 和 GraphCNN。此外，还有一些示例尚未更新，比如 A3C 和分布式训练。如果您有兴趣加入开发团队，请随时联系我们：tensorlayer@gmail.com。\n\n祝编码愉快！\n\nTensorLayer 团队\n\n\n# 参考文献\n- [TensorLayer 2.0 问题讨论](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fissues\u002F900)\n\n# 贡献列表\n\n所有贡献可参见以下内容：\n\n## 层\n- [x] **core.py:**\n  * Layer:\n    - [x] 重构 @JingqingZ 2019年1月28日\n    - [x] 测试 @JingqingZ 2019年1月31日、2019年3月6日\n    - [x] 文档编写 @JingqingZ 2019年3月6日\n  * ModelLayer:\n    - [x] 创建 @JingqingZ 2019年1月28日\n    - [x] 测试 @JingqingZ 2019年3月6日\n    - [x] 文档编写 @JingqingZ 2019年3月6日\n  * LayerList:\n    - [x] 创建 @JingqingZ 2019年1月28日 @ChrisWu1997\n    - [x] 测试 @JingqingZ 2019年3月6日\n    - [x] 文档编写 @JingqingZ 2019年3月6日\n  * LayerNode:\n    - [x] 创建 @ChrisWu1997\n    - [x] 测试 @ChrisWu1997 2019年3月22日\n    - [x] 文档编写 @ChrisWu1997 2019年3月22日\n- [x] **activation.py:**\n  * PRelu:\n    - [x] 重构 @zsdonghao 2018年12月4日 @JingqingZ 2019年3月20日\n    - [x] 测试 @JingqingZ 2019年3月20日\n    - [x] 文档编写 @JingqingZ 2019年3月20日\n  * PRelu6:\n    - [x] 重构 @zsdonghao 2018年12月4日 @JingqingZ 2019年3月20日\n    - [x] 测试 @JingqingZ 2019年3月20日\n    - [x] 文档编写 @JingqingZ 2019年3月20日\n  * PTRelu6:\n    - [x] 重构 @zsdonghao 2018年12月4日 @JingqingZ 2019年3月20日\n    - [x] 测试 @JingqingZ 2019年3月20日\n    - [x] 文档编写 @JingqingZ 2019年3月20日\n- **convolution\u002F**  \n  * AtrousConv1dLayer、AtrousConv2dLayer 和 AtrousDeConv2d 已移除，改用带有 `dilation_rate` 参数的 Conv1d\u002F2d 和 DeConv2d。（🀄️请记得更新中文文档）\n  * BinaryConv2d:\n    - [x] 重构 @zsdonghao 2018年12月5日\n    - [x] 测试 @warshallrho 2019年3月16日\n    - [x] 文档编写 @warshallrho 2019年3月20日\n  * Conv1d:\n    - [x] 重构 @zsdonghao 2019年1月16日\n    - [x] 测试 @warshallrho 2019年3月15日\n    - [x] 文档编写 @warshallrho 2019年3月17日\n  * Conv2d:\n    - [x] 重构 @zsdonghao 2019年1月16日\n    - [x] 测试 @warshallrho 2019年3月15日\n    - [x] 文档编写 @warshallrho 2019年3月17日\n  * Conv3d:\n    - [x] 新增 @zsdonghao 2019年1月16日：（🀄️请记得更新中文文档）\n    - [x] 测试 @warshallrho 2019年3月15日\n    - [x] 文档编写 
@warshall","2019-05-04T17:48:22",{"id":214,"version":215,"summary_zh":216,"released_at":217},280503,"1.11.1","This is a maintenance release. All users are suggested to update.\r\n\r\n### Changed\r\n\r\n* guide for pose estimation - flipping (PR #884)\r\n* cv2 transform support 2 modes (PR #885)\r\n\r\n### Dependencies Update\r\n- pytest>=3.6,\u003C3.9 => pytest>=3.6,\u003C3.10 (PR #874)\r\n- requests>=2.19,\u003C2.20 => requests>=2.19,\u003C2.21 (PR #874)\r\n- tqdm>=4.23,\u003C4.28 => tqdm>=4.23,\u003C4.29 (PR #878)\r\n- pytest>=3.6,\u003C3.10 => pytest>=3.6,\u003C3.11 (PR #886)\r\n- pytest-xdist>=1.22,\u003C1.24 => pytest-xdist>=1.22,\u003C1.25 (PR #883)\r\n- tensorflow>=1.6,\u003C1.12 => tensorflow>=1.6,\u003C1.13 (PR #886)","2018-11-15T02:48:15",{"id":219,"version":220,"summary_zh":76,"released_at":221},280504,"1.11.0","2018-10-18T15:37:30",{"id":223,"version":224,"summary_zh":225,"released_at":226},280505,"1.11.0rc0","This release provides high-performance image augmentation API. The API is based on affine transformation. 
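The affine-matrix composition this release is built on can be sketched in a few lines of plain NumPy — a minimal illustration, not TensorLayer's actual implementation, and the helper names below are invented for the sketch: all 3x3 matrices are multiplied together first, so the expensive image warp (and the keypoint transform) runs only once.

```python
import numpy as np

# Minimal sketch of affine-matrix composition (helper names are
# illustrative, not the TensorLayer `affine_*_matrix` functions).
def rotation_matrix(deg):
    r = np.deg2rad(deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def zoom_matrix(scale):
    return np.diag([scale, scale, 1.0])

def shift_matrix(dx, dy):
    return np.array([[1.0, 0.0, dx],
                     [0.0, 1.0, dy],
                     [0.0, 0.0, 1.0]])

# Compose once: a single combined matrix replaces three separate warps,
# so the costly per-pixel interpolation only has to run a single time.
M = shift_matrix(10, 5) @ rotation_matrix(90) @ zoom_matrix(2.0)

# Keypoints transform with the same combined matrix (homogeneous coords).
p = M @ np.array([1.0, 0.0, 1.0])
print(p)  # ~ [10., 7., 1.]
```

An `affine_transform_cv2`-style call then applies `M` to the whole image in one warp, instead of warping once per elementary transform.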
It has been shown to offer an 80x speed-up when augmenting images in the [openpose-plus](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fopenpose-plus) project.\r\n\r\n### Added\r\n- Layer:\r\n  - Release `GroupNormLayer` (PR #850)\r\n- Image affine transformation APIs\r\n  - `affine_rotation_matrix` (PR #857)\r\n  - `affine_horizontal_flip_matrix` (PR #857)\r\n  - `affine_vertical_flip_matrix` (PR #857)\r\n  - `affine_shift_matrix` (PR #857)\r\n  - `affine_shear_matrix` (PR #857)\r\n  - `affine_zoom_matrix` (PR #857)\r\n  - `affine_transform_cv2` (PR #857)\r\n  - `affine_transform_keypoints` (PR #857)\r\n- Affine transformation tutorial\r\n  - `examples\u002Fdata_process\u002Ftutorial_fast_affine_transform.py` (PR #857)\r\n\r\n### Changed\r\n\r\n- BatchNormLayer: support `data_format`\r\n\r\n### Dependencies Update\r\n- yapf>=0.22,\u003C0.24 => yapf>=0.22,\u003C0.25 (PR #829)\r\n- sphinx>=1.7,\u003C1.8 => sphinx>=1.7,\u003C1.9 (PR #842)\r\n- matplotlib>=2.2,\u003C2.3 => matplotlib>=2.2,\u003C3.1 (PR #845)\r\n- scikit-learn>=0.19,\u003C0.20 => scikit-learn>=0.19,\u003C0.21 (PR #851)\r\n- tensorflow>=1.6,\u003C1.11 => tensorflow>=1.6,\u003C1.12 (PR #853)\r\n- tqdm>=4.23,\u003C4.26 => tqdm>=4.23,\u003C4.27 (PR #862)\r\n- pydocstyle>=2.1,\u003C2.2 => pydocstyle>=2.1,\u003C3.1 (PR #866)\r\n\r\n### Deprecated\r\n\r\n### Fixed\r\n- Correct offset calculation in `tl.prepro.transform_matrix_offset_center` (PR #855)\r\n\r\n### Removed\r\n\r\n### Security\r\n\r\n### Contributors\r\n- @2wins: #850 #855\r\n- @DEKHTIARJonathan: #853\r\n- @zsdonghao: #857\r\n- @luomai: #857","2018-10-15T09:09:34",{"id":228,"version":229,"summary_zh":230,"released_at":231},280506,"1.10.1","## Important Notice\r\n**TensorLayer 1.10.x will be the last supported version of TL 1.X, big changes are upcoming and won't preserve backward compatibility. TensorLayer 1.10.x will only be updated with bugfixes on existing features. 
No additional feature will be implemented in TL 1.10.x**\r\n\r\n## Changelog\r\n\r\n### Added\r\n- unittest `tests\\test_timeout.py` has been added to ensure the network creation process does not freeze.\r\n\r\n### Changed\r\n - remove 'tensorboard' param, replaced by 'tensorboard_dir' in `tensorlayer\u002Futils.py` with customizable tensorboard directory (PR #819)\r\n\r\n### Removed\r\n- TL Graph API removed. Memory Leaks Issues with Graph API, will be fixed and integrated in TL 2.0 (PR #818)\r\n\r\n### Fixed\r\n- Issue #817 fixed: TL 1.10.0 - Memory Leaks and very slow network creation.\r\n\r\n### Dependencies Update\r\n- autopep8>=1.3,\u003C1.4 => autopep8>=1.3,\u003C1.5 (PR #815)\r\n- pytest-cov>=2.5,\u003C2.6 => pytest-cov>=2.5,\u003C2.7 (PR #820)\r\n- pytest>=3.6,\u003C3.8 => pytest>=3.6,\u003C3.9 (PR #823)\r\n- imageio>=2.3,\u003C2.4 => imageio>=2.3,\u003C2.5 (PR #823)\r\n\r\n### Contributors\r\n- @DEKHTIARJonathan: #815 #818 #820 #823\r\n- @ndiy: #819 \r\n- @zsdonghao: #818","2018-09-07T14:54:49",{"id":233,"version":234,"summary_zh":235,"released_at":236},280507,"1.10.1rc0","## Changelog\r\n\r\n### Added\r\n- unittest `tests\\test_timeout.py` has been added to ensure the network creation process does not freeze.\r\n\r\n### Changed\r\n - remove 'tensorboard' param, replaced by 'tensorboard_dir' in `tensorlayer\u002Futils.py` with customizable tensorboard directory (PR #819)\r\n\r\n### Removed\r\n- TL Graph API removed. 
Memory Leaks Issues with this API, will be fixed and integrated in TL 2.0 (PR #818)\r\n\r\n### Fixed\r\n- Issue #817 fixed: TL 1.10.0 - Memory Leaks and very slow network creation.\r\n\r\n### Dependencies Update\r\n- autopep8>=1.3,\u003C1.4 => autopep8>=1.3,\u003C1.5 (PR #815)\r\n- pytest-cov>=2.5,\u003C2.6 => pytest-cov>=2.5,\u003C2.7 (PR #820)\r\n- pytest>=3.6,\u003C3.8 => pytest>=3.6,\u003C3.9 (PR #823)\r\n- imageio>=2.3,\u003C2.4 => imageio>=2.3,\u003C2.5 (PR #823)\r\n\r\n### Contributors\r\n- @DEKHTIARJonathan: #815 #818 #820 #823\r\n- @ndiy: #819 \r\n- @zsdonghao: #818","2018-09-05T00:08:10",{"id":238,"version":239,"summary_zh":240,"released_at":241},280508,"1.10.0","## Important Notice\r\n**This release contains a memory leak issue.**\r\n\r\n### Release Note\r\nIt has been a very busy summer for the TensorLayer team. In this version, we start to support: \r\n- query and modify a neural network through an intuitive [**graph API**](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fmodules\u002Ffiles.html#tensorlayer.files.save_graph); \r\n- transparently scale-out your single-GPU training job onto multiple GPUs on a single server and multiple servers using a high-performance [**trainer**](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fmodules\u002Fdistributed.html) module. Trainer is backed by the high-performance and scalable Hovorod library, see examples [here](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexamples\u002Fdistributed_training);\r\n- reduce the memory usage of a neural network and even accelerate it using many advanced [**Network Quantization Layers**](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Fstable\u002Fmodules\u002Flayers.html#quantized-nets);\r\n- add more pre-trained models in our [**model**](https:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Fstable\u002Fmodules\u002Fmodels.html) module. 
\r\n\r\nMost importantly, we have decided to open-source a series of neural network application codes that have been used by practitioners. The first batch includes:\r\n- [**adaptive style transfer**](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fadaptive-style-transfer) which allows you to do almost any kind of style transfer without compromising performance.\r\n- [**flexible openpose**](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fopenpose) which allows you to deeply customize your openpose network based on the actual data shapes, accuracy requirements, memory constraints and inference speed targets.\r\n- [**super resolution**](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Fsrgan) which allows you to apply this fantastic technique to medical imaging and many other important fields.\r\n\r\nAt the same time, as a note ahead: we are working very hard towards the TensorLayer 2.0 release in order to synchronize with the coming TensorFlow 2.0. \r\n\r\nEnjoy this release, and we would love your feedback!\r\n\r\n\r\n### Added\r\n\r\n- API:\r\n  - Add `tl.model.vgg19` (PR #698)\r\n  - Add `tl.logging.contrib.hyperdash` (PR #739)\r\n  - Add `tl.distributed.trainer` (PR #700)\r\n  - Add `prefetch_buffer_size` to the `tl.distributed.Trainer` (PR #766)\r\n  - Add `tl.db.TensorHub` (PR #751)\r\n  - Add `tl.files.save_graph` (PR #751)\r\n  - Add `tl.files.load_graph_` (PR #751)\r\n  - Add `tl.files.save_graph_and_params` (PR #751)\r\n  - Add `tl.files.load_graph_and_params` (PR #751)\r\n  - Add `tl.prepro.keypoint_random_xxx` (PR #787)\r\n- Documentation:\r\n  - Add binary, ternary and dorefa links (PR #711)\r\n  - Update input scale of VGG16 and VGG19 to 0~1 (PR #736)\r\n  - Update database (PR #751)\r\n- Layer:\r\n  - Release SwitchNormLayer (PR #737)\r\n  - Release QuanConv2d, QuanConv2dWithBN, QuanDenseLayer, QuanDenseLayerWithBN (PR #735)\r\n  - Update Core Layer to support graph (PR #751)\r\n  - All Pooling layers support `data_format` (PR #809)\r\n- 
Setup:\r\n  - Creation of installation flags `all_dev`, `all_cpu_dev`, and `all_gpu_dev` (PR #739)\r\n- Examples:\r\n  - changed folder structure (PR #802)\r\n  - `tutorial_models_vgg19` has been introduced to show how to use `tl.model.vgg19` (PR #698).\r\n  - fix bug of `tutorial_bipedalwalker_a3c_continuous_action.py` (PR #734, Issue #732)\r\n  - `tutorial_models_vgg16` and `tutorial_models_vgg19` have had their input scale changed from [0,255] to [0,1] (PR #710)\r\n  - `tutorial_mnist_distributed_trainer.py` and `tutorial_cifar10_distributed_trainer.py` are added to explain the uses of Distributed Trainer (PR #700)\r\n  - add `tutorial_quanconv_cifar10.py` and `tutorial_quanconv_mnist.py` (PR #735)\r\n  - add `tutorial_work_with_onnx.py` (PR #775)\r\n- Applications:\r\n  - [Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1703.06868) (PR #799)\r\n\r\n### Changed\r\n\r\n  - function minibatches changed to avoid wasting samples. (PR #762)\r\n  - the input scale in both vgg16 and vgg19 has been changed from [0,255] to [0,1] (PR #710)\r\n  - Dockerfiles merged and refactored into one file (PR #747)\r\n  - LazyImports moved to the **top-level** imports where possible (PR #739)\r\n  - some new test functions have been added in `test_layers_convolution.py`, `test_layers_normalization.py`, `test_layers_core.py` (PR #735)\r\n  - documentation now uses mock imports, reducing the number of dependencies needed to compile the documentation (PR #785)\r\n  - fixed and enforced pydocstyle D210, D200, D301, D207, D403, D204, D412, D402, D300, D208 (PR #784)\r\n\r\n### Deprecated\r\n\r\n  - `tl.logging.warn` has been deprecated in favor of `tl.logging.warning` (PR #739)\r\n\r\n### Removed\r\n\r\n  - `conv_layers()` has been removed in both vgg16 and vgg19 (PR #710)\r\n\r\n### Fixed\r\n\r\n- import error caused by matplotlib on OSX (PR #705)\r\n- missing import in tl.prepro (PR #712)\r\n- Dockerfiles 
import error fixed - issue #733 (PR #747)\r\n- Fix a typo in `absolute_difference_error` in file: `tensorlayer\u002Fcost.py` - Issue #753 ","2018-09-01T20:06:33",{"id":243,"version":244,"summary_zh":245,"released_at":246},280509,"1.9.1","### Release Note\r\nThis version is identical to 1.9.0 but fixes the issue with TF 1.10.0: https:\u002F\u002Fstackoverflow.com\u002Fquestions\u002F51593126\u002Ftensorlayer-fails-with-tensorflow-1-10-0rc0#51593186\r\n\r\n# Changelog\r\n- Issue with tensorflow 1.10.0 fixed","2018-07-30T11:32:45",{"id":248,"version":249,"summary_zh":250,"released_at":251},280510,"1.9.0","### Release Note\r\nThis version was supposed to be released as **1.8.6**; due to the large amount of changes introduced, it has been decided to release it as **1.9.0**\r\n\r\n# Changelog\r\n\r\n### Added\r\n- API:\r\n  - `tl.alphas` and `tl.alphas_like` added following the tf.ones\u002Fzeros and tf.zeros_like\u002Fones_like (PR #580)\r\n  - `tl.lazy_imports.LazyImport` to import heavy libraries only when necessary (PR #667)\r\n  - `tl.act.leaky_relu6` and `tl.layers.PRelu6Layer` have been deprecated (PR #686)\r\n  - `tl.act.leaky_twice_relu6` and `tl.layers.PTRelu6Layer` have been deprecated (PR #686)\r\n- CI Tool:\r\n  - [Stale Probot](https:\u002F\u002Fgithub.com\u002Fprobot\u002Fstale) added to clean stale issues (PR #573)\r\n  - [Changelog Probot](https:\u002F\u002Fgithub.com\u002Fmikz\u002Fprobot-changelog) Configuration added (PR #637)\r\n  - Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (PR #644)\r\n  - CircleCI added to build and upload Docker Containers for each PR merged and tag release (PR #648)\r\n- Decorator:\r\n  - `tl.decorators` API created including `deprecated_alias` and `private_method` (PR #660)\r\n  - `tl.decorators` API enriched with `protected_method` (PR #675)\r\n  - `tl.decorators` API enriched with `deprecated` directly raising warning and modifying 
documentation (PR #691)\r\n- Docker:\r\n  - Containers for each release and for each PR merged on master built (PR #648)\r\n  - Containers built in the following configurations (PR #648):\r\n    - py2 + cpu\r\n    - py2 + gpu\r\n    - py3 + cpu\r\n    - py3 + gpu\r\n- Documentation:\r\n  - Clean README.md (PR #677)\r\n  - Release semantic version added on index page (PR #633)\r\n  - Optimizers page added (PR #636)\r\n  - `AMSGrad` added on Optimizers page (PR #636)\r\n- Layer:\r\n  - ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (PR #579)\r\n  - AtrousDeConv2dLayer added (PR #662)\r\n  - Fix bugs of using `tf.layers` in CNN (PR #686)\r\n- Optimizer:\r\n  - AMSGrad Optimizer added based on `On the Convergence of Adam and Beyond (ICLR 2018)` (PR #636)\r\n- Setup:\r\n  - Creation of installation flags `all`, `all_cpu`, and `all_gpu` (PR #660)\r\n- Test:\r\n  - `test_utils_predict.py` added to reproduce and fix issue #288 (PR #566)\r\n  - `Layer_DeformableConvolution_Test` added to reproduce issue #572 with deformable convolution (PR #573)\r\n  - `Array_Op_Alphas_Test` and `Array_Op_Alphas_Like_Test` added to test `tensorlayer\u002Farray_ops.py` file (PR #580)\r\n  - `test_optimizer_amsgrad.py` added to test `AMSGrad` optimizer (PR #636)\r\n  - `test_logging.py` added to ensure robustness of the logging API (PR #645)\r\n  - `test_decorators.py` added (PR #660)\r\n  - `test_activations.py` added (PR #686)\r\n- Tutorials:\r\n  - `tutorial_tfslim` has been introduced to show how to use `SlimNetsLayer` (PR #560).\r\n  - add the following to all tutorials (PR #697):  \r\n    ```python\r\n    tf.logging.set_verbosity(tf.logging.DEBUG)\r\n    tl.logging.set_verbosity(tl.logging.DEBUG)\r\n    ```\r\n\r\n### Changed\r\n- Tensorflow CPU & GPU dependencies moved to separate requirement files in order to allow PyUP.io to parse them (PR #573)\r\n- The document of LambdaLayer updated to link it with ElementwiseLambdaLayer (PR #587)\r\n- RTD 
links point to stable documentation instead of latest used for development (PR #633)\r\n- TF versions older than 1.6.0 are officially unsupported and raise an exception (PR #644)\r\n- README.md badges updated with supported Python and Tensorflow versions (PR #644)\r\n- TL logging API has been made consistent with the TF logging API and thread-safe (PR #645)\r\n- Relative imports changed to absolute imports (PR #657)\r\n- `tl.files` refactored into a directory with numerous files (PR #657)\r\n- `tl.files.voc_dataset` fixed because the original Pascal VOC website was down (PR #657)\r\n- extra requirements hidden inside the library added in the project requirements (PR #657)\r\n- requirements files refactored in `requirements\u002F` directory (PR #657)\r\n- README.md and other markdown files have been refactored and cleaned. (PR #639)\r\n- Ternary Convolution Layer added in unittest (PR #658)\r\n- Convolution Layers unittests have been cleaned & refactored (PR #658)\r\n- All the tests are now using a DEBUG level verbosity when run individually (PR #660)\r\n- `tf.identity` as activation is **ignored**, thus reducing the size of the graph by removing a useless operation (PR #667)\r\n- argument dictionaries are now checked and saved within the `Layer` Base Class (PR #667)\r\n- `Layer` Base Class now presents methods to faultlessly update `all_layers`, `all_params`, and `all_drop` (PR #675)\r\n- Input Layers have been removed from `tl.layers.core` and added to `tl.layers.inputs` (PR #675)\r\n- Input Layers are now considered as true layers in the graph (they represent a placeholder), unittests have been updated (PR #675)\r\n- Layer API is simplified, with automatic feeding of `prev_layer` into `self.inputs` (PR #675)\r\n- Complete Documentation Refactoring and Reorganization (namely Layer APIs) (PR #691)\r\n\r\n### Deprecated\r\n-","2018-06-16T18:30:22",{"id":253,"version":254,"summary_zh":255,"released_at":256},280511,"1.8.6rc6","### Added\r\n- API:\r\n  - `tl.alphas` and 
`tl.alphas_like` added following the tf.ones\u002Fzeros and tf.zeros_like\u002Fones_like (PR #580)\r\n  - `tl.lazy_imports.LazyImport` to import heavy libraries only when necessary (PR #667)\r\n  - `tl.act.leaky_relu6` and `tl.layers.PRelu6Layer` have been deprecated (PR #686)\r\n  - `tl.act.leaky_twice_relu6` and `tl.layers.PTRelu6Layer` have been deprecated (PR #686)\r\n- CI Tool:\r\n  - [Stale Probot](https:\u002F\u002Fgithub.com\u002Fprobot\u002Fstale) added to clean stale issues (PR #573)\r\n  - [Changelog Probot](https:\u002F\u002Fgithub.com\u002Fmikz\u002Fprobot-changelog) Configuration added (PR #637)\r\n  - Travis Builds now handling a matrix of TF Version from TF==1.6.0 to TF==1.8.0 (PR #644)\r\n  - CircleCI added to build and upload Docker Containers for each PR merged and tag release (PR #648)\r\n- Decorator:\r\n  - `tl.decorators` API created including `deprecated_alias` and `private_method` (PR #660)\r\n  - `tl.decorators` API enriched with `protected_method` (PR #675)\r\n  - `tl.decorators` API enriched with `deprecated` directly raising warning and modifying documentation (PR #691)\r\n- Docker:\r\n  - Containers for each release and for each PR merged on master built (PR #648)\r\n  - Containers built in the following configurations (PR #648):\r\n    - py2 + cpu\r\n    - py2 + gpu\r\n    - py3 + cpu\r\n    - py3 + gpu\r\n- Documentation:\r\n  - Clean README.md (PR #677)\r\n  - Release semantic version added on index page (PR #633)\r\n  - Optimizers page added (PR #636)\r\n  - `AMSGrad` added on Optimizers page (PR #636)\r\n- Layer:\r\n  - ElementwiseLambdaLayer added to use custom function to connect multiple layer inputs (PR #579)\r\n  - AtrousDeConv2dLayer added (PR #662)\r\n  - Fix bugs of using `tf.layers` in CNN (PR #686)\r\n- Optimizer:\r\n  - AMSGrad Optimizer added based on `On the Convergence of Adam and Beyond (ICLR 2018)` (PR #636)\r\n- Setup:\r\n  - Creation of installation flags `all`, `all_cpu`, and `all_gpu` (PR #660)\r\n- 
Test:\r\n  - `test_utils_predict.py` added to reproduce and fix issue #288 (PR #566)\r\n  - `Layer_DeformableConvolution_Test` added to reproduce issue #572 with deformable convolution (PR #573)\r\n  - `Array_Op_Alphas_Test` and `Array_Op_Alphas_Like_Test` added to test `tensorlayer\u002Farray_ops.py` file (PR #580)\r\n  - `test_optimizer_amsgrad.py` added to test `AMSGrad` optimizer (PR #636)\r\n  - `test_logging.py` added to ensure robustness of the logging API (PR #645)\r\n  - `test_decorators.py` added (PR #660)\r\n  - `test_activations.py` added (PR #686)\r\n- Tutorials:\r\n  - `tutorial_tfslim` has been introduced to show how to use `SlimNetsLayer` (PR #560).\r\n  - add the following to all tutorials (PR #697):  \r\n    ```python\r\n    tf.logging.set_verbosity(tf.logging.DEBUG)\r\n    tl.logging.set_verbosity(tl.logging.DEBUG)\r\n    ```\r\n    \r\n### Changed\r\n- Tensorflow CPU & GPU dependencies moved to separate requirement files in order to allow PyUP.io to parse them (PR #573)\r\n- The document of LambdaLayer updated to link it with ElementwiseLambdaLayer (PR #587)\r\n- RTD links point to stable documentation instead of latest used for development (PR #633)\r\n- TF versions older than 1.6.0 are officially unsupported and raise an exception (PR #644)\r\n- README.md badges updated with supported Python and Tensorflow versions (PR #644)\r\n- TL logging API has been made consistent with the TF logging API and thread-safe (PR #645)\r\n- Relative imports changed to absolute imports (PR #657)\r\n- `tl.files` refactored into a directory with numerous files (PR #657)\r\n- `tl.files.voc_dataset` fixed because the original Pascal VOC website was down (PR #657)\r\n- extra requirements hidden inside the library added in the project requirements (PR #657)\r\n- requirements files refactored in `requirements\u002F` directory (PR #657)\r\n- README.md and other markdown files have been refactored and cleaned. 
(PR #639)\r\n- Ternary convolution layer added to the unittests (PR #658)\r\n- Convolution layer unittests cleaned and refactored (PR #658)\r\n- All tests now use DEBUG-level verbosity when run individually (PR #660)\r\n- `tf.identity` as activation is **ignored**, thus reducing the size of the graph by removing useless operations (PR #667)\r\n- Argument dictionaries are now checked and saved within the `Layer` base class (PR #667)\r\n- `Layer` base class now provides methods to faultlessly update `all_layers`, `all_params`, and `all_drop` (PR #675)\r\n- Input layers have been removed from `tl.layers.core` and added to `tl.layers.inputs` (PR #675)\r\n- Input layers are now considered true layers in the graph (they represent a placeholder); unittests have been updated (PR #675)\r\n- Layer API simplified, with `prev_layer` automatically fed into `self.inputs` (PR #675)\r\n- Complete documentation refactoring and reorganization (namely the Layer APIs) (PR #691)\r\n\r\n### Deprecated\r\n- `tl.layers.TimeDistributedLayer` argument `args` is deprecated in favor of `layer_args` (PR #667)\r\n- `tl.act.leaky_relu` has been deprecated in favor of `tf.nn.leaky_relu` (PR #686)\r\n\r\n### Removed\r\n- `assert()` calls remove and rep","2018-06-15T15:49:40",{"id":258,"version":259,"summary_zh":260,"released_at":261},280512,"1.8.6rc5","# Changelog\r\n\r\n### Added\r\n- API:\r\n  - `tl.alphas` and `tl.alphas_like` added following the tf.ones\u002Fzeros and tf.zeros_like\u002Fones_like (by @DEKHTIARJonathan in #580)\r\n  - `tl.lazy_imports.LazyImport` to import heavy libraries only when necessary (by @DEKHTIARJonathan in #667)\r\n- CI Tool:\r\n  - [Stale Probot](https:\u002F\u002Fgithub.com\u002Fprobot\u002Fstale) added to clean stale issues (by @DEKHTIARJonathan in #573)\r\n  - [Changelog Probot](https:\u002F\u002Fgithub.com\u002Fmikz\u002Fprobot-changelog) configuration added (by @DEKHTIARJonathan in #637)\r\n  - Travis builds now handle a matrix of TF 
versions from TF==1.6.0 to TF==1.8.0 (by @DEKHTIARJonathan in #644)\r\n  - CircleCI added to build and upload Docker containers for each merged PR and tagged release (by @DEKHTIARJonathan in #648)\r\n- Decorator:\r\n  - `tl.decorators` API created, including `deprecated_alias` and `private_method` (by @DEKHTIARJonathan in #660)\r\n  - `tl.decorators` API enriched with `protected_method` (by @DEKHTIARJonathan in #675)\r\n- Docker:\r\n  - Containers built for each release and for each PR merged on master (by @DEKHTIARJonathan in #648)\r\n  - Containers built in the following configurations (by @DEKHTIARJonathan in #648):\r\n    - py2 + cpu\r\n    - py2 + gpu\r\n    - py3 + cpu\r\n    - py3 + gpu\r\n- Documentation:\r\n  - Cleaned README (by @luomai in #677)\r\n  - Release semantic version added on the index page (by @DEKHTIARJonathan in #633)\r\n  - Optimizers page added (by @DEKHTIARJonathan in #636)\r\n  - `AMSGrad` added to the Optimizers page (by @DEKHTIARJonathan in #636)\r\n- Layer:\r\n  - ElementwiseLambdaLayer added to use a custom function to connect multiple layer inputs (by @One-sixth in #579)\r\n  - AtrousDeConv2dLayer added (by @2wins in #662)\r\n  - Fixed bugs when using `tf.layers` in CNNs (by @zsdonghao in #686)\r\n- Optimizer:\r\n  - AMSGrad optimizer added based on `On the Convergence of Adam and Beyond (ICLR 2018)` (by @DEKHTIARJonathan in #636)\r\n- Setup:\r\n  - Creation of installation flags `all`, `all_cpu`, and `all_gpu` (by @DEKHTIARJonathan in #660)\r\n- Test:\r\n  - `test_utils_predict.py` added to reproduce and fix issue #288 (by @2wins in #566)\r\n  - `Layer_DeformableConvolution_Test` added to reproduce issue #572 with deformable convolution (by @DEKHTIARJonathan in #573)\r\n  - `Array_Op_Alphas_Test` and `Array_Op_Alphas_Like_Test` added to test the `tensorlayer\u002Farray_ops.py` file (by @DEKHTIARJonathan in #580)\r\n  - `test_optimizer_amsgrad.py` added to test the `AMSGrad` optimizer (by @DEKHTIARJonathan in #636)\r\n  - `test_logging.py` added to ensure 
robustness of the logging API (by @DEKHTIARJonathan in #645)\r\n  - `test_decorators.py` added (by @DEKHTIARJonathan in #660)\r\n- Tutorials:\r\n  - `tutorial_tfslim` has been introduced to show how to use `SlimNetsLayer` (by @2wins in #560).\r\n\r\n### Changed\r\n- TensorFlow CPU & GPU dependencies moved to separate requirements files so that PyUP.io can parse them (by @DEKHTIARJonathan in #573)\r\n- The LambdaLayer documentation updated to link it with ElementwiseLambdaLayer (by @zsdonghao in #587)\r\n- RTD links now point to the stable documentation instead of latest, which is used for development (by @DEKHTIARJonathan in #633)\r\n- TF versions older than 1.6.0 are officially unsupported and raise an exception (by @DEKHTIARJonathan in #644)\r\n- README badges updated with supported Python and TensorFlow versions (by @DEKHTIARJonathan in #644)\r\n- TL logging API made consistent with the TF logging API and thread-safe (by @DEKHTIARJonathan in #645)\r\n- Relative imports changed to absolute imports (by @DEKHTIARJonathan in #657)\r\n- `tl.files` refactored into a directory with numerous files (by @DEKHTIARJonathan in #657)\r\n- `tl.files.voc_dataset` fixed because the original Pascal VOC website was down (by @DEKHTIARJonathan in #657)\r\n- Extra requirements previously hidden inside the library added to the project requirements (by @DEKHTIARJonathan in #657)\r\n- Requirements files refactored into the `requirements\u002F` directory (by @DEKHTIARJonathan in #657)\r\n- README.md and other markdown files have been refactored and cleaned. 
(by @zsdonghao @DEKHTIARJonathan @luomai in #639)\r\n- Ternary convolution layer added to the unittests (by @DEKHTIARJonathan in #658)\r\n- Convolution layer unittests cleaned and refactored (by @DEKHTIARJonathan in #658)\r\n- All tests now use DEBUG-level verbosity when run individually (by @DEKHTIARJonathan in #660)\r\n- `tf.identity` as activation is **ignored**, thus reducing the size of the graph by removing useless operations (by @DEKHTIARJonathan in #667)\r\n- Argument dictionaries are now checked and saved within the `Layer` base class (by @DEKHTIARJonathan in #667)\r\n- `Layer` base class now provides methods to faultlessly update `all_layers`, `all_params`, and `all_drop` (by @DEKHTIARJonathan in #675)\r\n- Input layers have been removed from `tl.layers.core` and added to `tl.layers.inputs` (by @DEKHTIARJonathan in #675)\r\n- Input layers are now considered true layers in the graph (they represent a placeholder); unittests have been updated (by @DEKHTIARJonathan in #675)\r\n- Layer API is ","2018-06-07T19:51:11"]
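Both changelog copies above list `tl.lazy_imports.LazyImport` (PR #667) for importing heavy libraries only when necessary. A minimal, pure-Python sketch of that lazy-import pattern, not the actual TensorLayer implementation, using the stdlib `json` module as a stand-in for a heavy library:

```python
import importlib


class LazyImport:
    """Defer importing a module until one of its attributes is first accessed."""

    def __init__(self, name):
        self._name = name
        self._module = None  # nothing imported yet

    def _load(self):
        # Import on first use, then cache the module object.
        if self._module is None:
            self._module = importlib.import_module(self._name)
        return self._module

    def __getattr__(self, attr):
        # Called only for attributes not found normally, i.e. real usage.
        return getattr(self._load(), attr)


# "json" stands in for a heavy optional dependency; the import cost is
# paid only at the first attribute access below, not at construction.
json_lazy = LazyImport("json")
print(json_lazy.dumps({"a": 1}))  # the import happens here
```

Deferring `importlib.import_module` to the first attribute access keeps the package's own import fast for users who never touch the heavy optional dependency, which is the motivation stated in the changelog entry.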