[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-wagamamaz--tensorflow-tutorial":3,"tool-wagamamaz--tensorflow-tutorial":64},[4,17,27,35,43,56],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":23,"last_commit_at":41,"category_tags":42,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":44,"name":45,"github_repo":46,"description_zh":47,"stars":48,"difficulty_score":23,"last_commit_at":49,"category_tags":50,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,51,52,53,15,54,26,13,55],"数据工具","视频","插件","其他","音频",{"id":57,"name":58,"github_repo":59,"description_zh":60,"stars":61,"difficulty_score":10,"last_commit_at":62,"category_tags":63,"status":16},3128,"ragflow","infiniflow\u002Fragflow","RAGFlow 
是一款领先的开源检索增强生成（RAG）引擎，旨在为大语言模型构建更精准、可靠的上下文层。它巧妙地将前沿的 RAG 技术与智能体（Agent）能力相结合，不仅支持从各类文档中高效提取知识，还能让模型基于这些知识进行逻辑推理和任务执行。\n\n在大模型应用中，幻觉问题和知识滞后是常见痛点。RAGFlow 通过深度解析复杂文档结构（如表格、图表及混合排版），显著提升了信息检索的准确度，从而有效减少模型“胡编乱造”的现象，确保回答既有据可依又具备时效性。其内置的智能体机制更进一步，使系统不仅能回答问题，还能自主规划步骤解决复杂问题。\n\n这款工具特别适合开发者、企业技术团队以及 AI 研究人员使用。无论是希望快速搭建私有知识库问答系统，还是致力于探索大模型在垂直领域落地的创新者，都能从中受益。RAGFlow 提供了可视化的工作流编排界面和灵活的 API 接口，既降低了非算法背景用户的上手门槛，也满足了专业开发者对系统深度定制的需求。作为基于 Apache 2.0 协议开源的项目，它正成为连接通用大模型与行业专有知识之间的重要桥梁。",77062,"2026-04-04T04:44:48",[15,14,13,26,54],{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":70,"readme_en":71,"readme_zh":72,"quickstart_zh":73,"use_case_zh":74,"hero_image_url":75,"owner_login":76,"owner_name":77,"owner_avatar_url":78,"owner_bio":79,"owner_company":80,"owner_location":81,"owner_email":80,"owner_twitter":80,"owner_website":80,"owner_url":82,"languages":80,"stars":83,"forks":84,"last_commit_at":85,"license":80,"difficulty_score":23,"env_os":86,"env_gpu":87,"env_ram":86,"env_deps":88,"category_tags":93,"github_topics":94,"view_count":23,"oss_zip_url":80,"oss_zip_packed_at":80,"status":16,"created_at":113,"updated_at":114,"faqs":115,"releases":116},3452,"wagamamaz\u002Ftensorflow-tutorial","tensorflow-tutorial","TensorFlow and Deep Learning Tutorials","tensorflow-tutorial 是一个专为深度学习初学者和开发者打造的实战指南，旨在帮助用户快速掌握 TensorFlow 框架的核心用法。面对深度学习领域复杂的理论概念与繁琐的代码实现，它通过提供从基础入门到高级应用的完整代码示例，有效降低了学习门槛。\n\n该资源涵盖了机器学习基础、MNIST 数据集介绍等前置知识，并逐步深入至多层感知机、自动编码器、卷积神经网络（CNN）、循环神经网络（RNN\u002FLSTM）、深度强化学习以及词向量嵌入等主流模型架构。其独特亮点在于不仅提供了基于原生 TensorFlow 的实现方案，还对比展示了基于 TensorLayer 高层接口的简洁写法，部分关键教程更配有中文翻译链接，极大地便利了中文用户的学习路径。此外，它还精选了包括 MIT 深度学习教材、斯坦福教程及知名技术博客在内的权威阅读清单，帮助用户构建系统的知识体系。\n\n无论是刚接触人工智能的学生、希望转型的软件开发人员，还是从事算法研究的专业人士，都能从中找到适合的练习项目。通过直接运行提供的 Notebook 代码，用户可以直观地理解模型构建、训练与优化的全过程，是开启深度学习之旅的理想起点。","tensorflow-tutorial 是一个专为深度学习初学者和开发者打造的实战指南，旨在帮助用户快速掌握 TensorFlow 框架的核心用法。面对深度学习领域复杂的理论概念与繁琐的代码实现，它通过提供从基础入门到高级应用的完整代码示例，有效降低了学习门槛。\n\n该资源涵盖了机器学习基础、MNIST 
数据集介绍等前置知识，并逐步深入至多层感知机、自动编码器、卷积神经网络（CNN）、循环神经网络（RNN\u002FLSTM）、深度强化学习以及词向量嵌入等主流模型架构。其独特亮点在于不仅提供了基于原生 TensorFlow 的实现方案，还对比展示了基于 TensorLayer 高层接口的简洁写法，部分关键教程更配有中文翻译链接，极大地便利了中文用户的学习路径。此外，它还精选了包括 MIT 深度学习教材、斯坦福教程及知名技术博客在内的权威阅读清单，帮助用户构建系统的知识体系。\n\n无论是刚接触人工智能的学生、希望转型的软件开发人员，还是从事算法研究的专业人士，都能从中找到适合的练习项目。通过直接运行提供的 Notebook 代码，用户可以直观地理解模型构建、训练与优化的全过程，是开启深度学习之旅的理想起点。","# TensorFlow and Deep Learning Tutorials\n\n\u003Cdiv align=\"center\">\n  \u003Cdiv class=\"TensorFlow\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwagamamaz_tensorflow-tutorial_readme_443cd930ce5d.png\" style=\"float: left; margin-left: 5px; margin-bottom: 5px;\">\u003Cbr>\u003Cbr>\n  \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n## Google's Deep Learning Tutorials \n\n - [TensorFlow Official Deep Learning Tutorial](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002F).\n - MLP with Dropout [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fmnist\u002Fbeginners\u002Findex.html) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fmnist_beginners.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer-is-simple) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer)\n - Autoencoder [TensorLayer](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#denoising-autoencoder)\n - Convolutional Neural Network [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fmnist\u002Fpros\u002Findex.html) 
[[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fmnist_pros.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#convolutional-neural-network-cnn) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#convolutional-neural-network)\n - Recurrent Neural Network [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Frecurrent\u002Findex.html#recurrent-neural-networks) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Frecurrent.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-lstm) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#lstm)\n - Deep Reinforcement Learning [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-reinforcement-learning) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#id13)\n - Sequence to Sequence [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fseq2seq\u002Findex.html#sequence-to-sequence-models)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-translation)[[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#id30)\n - Word Embedding [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fword2vec\u002Findex.html#vector-representations-of-words) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fword2vec.html)  
[TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-word-embedding) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#word-embedding)\n \n## Deep Learning Reading List\n\n - [MIT Deep Learning Book](http:\u002F\u002Fwww.deeplearningbook.org)\n - [Karpathy Blog](http:\u002F\u002Fkarpathy.github.io)\n - [Stanford UFLDL Tutorials](http:\u002F\u002Fdeeplearning.stanford.edu\u002Ftutorial\u002F)\n - [Colah's Blog - Word Embedding](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2014-07-NLP-RNNs-Representations\u002F) [[中文]](http:\u002F\u002Fdataunion.org\u002F9331.html)\n - [Colah's Blog - Understand LSTM](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2015-08-Understanding-LSTMs\u002F) [[中文]](http:\u002F\u002Fmp.weixin.qq.com\u002Fs?__biz=MzI3NDExNDY3Nw==&mid=2649764821&idx=1&sn=dd325565b40fcbad6e90a9398414dede&scene=2&srcid=0505U2iFJ7tfXgB8yPfNkwrA&from=timeline&isappinstalled=0#wechat_redirect)\n \n\n## Tutorial index\n\n#### 0 - Prerequisite\n- Introduction to Machine Learning ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fml_introduction.ipynb))\n- Introduction to MNIST Dataset ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fmnist_dataset_intro.ipynb))\n\n#### 1 - Introduction\n- Hello World ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F1_Introduction\u002Fhelloworld.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F1_Introduction\u002Fhelloworld.py))\n- Basic Operations 
([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F1_Introduction\u002Fbasic_operations.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F1_Introduction\u002Fbasic_operations.py))\n\n#### 2 - Basic Models\n- Nearest Neighbor ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Fnearest_neighbor.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Fnearest_neighbor.py))\n- Linear Regression ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Flinear_regression.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Flinear_regression.py))\n- Logistic Regression ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Flogistic_regression.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Flogistic_regression.py))\n\n#### 3 - Neural Networks\n- Multilayer Perceptron ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fmultilayer_perceptron.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fmultilayer_perceptron.py))\n- Convolutional Neural Network 
([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fconvolutional_network.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fconvolutional_network.py))\n- Recurrent Neural Network (LSTM) ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Frecurrent_network.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Frecurrent_network.py))\n- Bidirectional Recurrent Neural Network (LSTM) ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fbidirectional_rnn.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fbidirectional_rnn.py))\n- Dynamic Recurrent Neural Network (LSTM) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fdynamic_rnn.py))\n- AutoEncoder ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fautoencoder.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fautoencoder.py))\n\n#### 4 - Utilities\n- Save and Restore a model ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F4_Utils\u002Fsave_restore_model.ipynb)) 
([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Fsave_restore_model.py))\n- Tensorboard - Graph and loss visualization ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F4_Utils\u002Ftensorboard_basic.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Ftensorboard_basic.py))\n- Tensorboard - Advanced visualization ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Ftensorboard_advanced.py))\n\n#### 5 - Multi GPU\n- Basic Operations on multi-GPU ([notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F5_MultiGPU\u002Fmultigpu_basics.ipynb)) ([code](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F5_MultiGPU\u002Fmultigpu_basics.py))\n\n## Dataset\nSome examples require MNIST dataset for training and testing. 
Don't worry, this dataset will automatically be downloaded when running examples (with input_data.py).\nMNIST is a database of handwritten digits, for a quick description of that dataset, you can check [this notebook](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fmnist_dataset_intro.ipynb).\n\nOfficial Website: [http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F](http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F)\n\n\n\n## Selected Repositories\n - [jtoy\u002Fawesome-tensorflow](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fawesome-tensorflow)\n - [nlintz\u002FTensorFlow-Tutorials](https:\u002F\u002Fgithub.com\u002Fnlintz\u002FTensorFlow-Tutorials)\n - [adatao\u002Ftensorspark](https:\u002F\u002Fgithub.com\u002Fadatao\u002Ftensorspark)\n - [ry\u002Ftensorflow-resnet](https:\u002F\u002Fgithub.com\u002Fry\u002Ftensorflow-resnet)\n\n## Tricks\n - [Tricks to use TensorLayer](https:\u002F\u002Fgithub.com\u002Fwagamamaz\u002Ftensorlayer-tricks)\n\n## Examples\n\n## Basics\n - Multi-layer perceptron (MNIST) - Classification task, see [tutorial\\_mnist\\_simple.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_simple.py).\n - Multi-layer perceptron (MNIST) - Classification using Iterator, see [method1](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mlp_dropout1.py) and [method2](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mlp_dropout2.py).\n\n\n## Computer Vision\n - Denoising Autoencoder (MNIST). Classification task, see [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py).\n - Stacked Denoising Autoencoder and Fine-Tuning (MNIST). 
Classification task, see [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py).\n - Convolutional Network (MNIST). Classification task, see [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py).\n - Convolutional Network (CIFAR-10). Classification task, see [tutorial\\_cifar10.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10.py) and [tutorial\\_cifar10_tfrecord.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10_tfrecord.py).\n - VGG 16 (ImageNet). Classification task, see [tutorial_vgg16.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_vgg16.py).\n - VGG 19 (ImageNet). Classification task, see [tutorial_vgg19.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_vgg19.py).\n - InceptionV3 (ImageNet). 
Classification task, see [tutorial\\_inceptionV3_tfslim.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_inceptionV3_tfslim.py).\n - Wide ResNet (CIFAR) by [ritchieng](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fwideresnet-tensorlayer).\n - More CNN implementations of [TF-Slim](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) can be connected to TensorLayer via SlimNetsLayer.\n - [Spatial Transformer Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02025) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FSpatial-Transformer-Nets).\n - [U-Net for brain tumor segmentation](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fu-net-brain-tumor) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fu-net-brain-tumor).\n - Variational Autoencoder (VAE) for (CelebA) by [yzwxx](https:\u002F\u002Fgithub.com\u002Fyzwxx\u002Fvae-celebA).\n - Variational Autoencoder (VAE) for (MNIST) by [BUPTLdy](https:\u002F\u002Fgithub.com\u002FBUPTLdy\u002Ftl-vae).\n - Image Captioning - Reimplementation of Google's [im2txt](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fim2txt) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FImage-Captioning).\n\n## Natural Language Processing\n - Recurrent Neural Network (LSTM). Apply multiple LSTM to PTB dataset for language modeling, see [tutorial_ptb_lstm.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_ptb_lstm.py) and [tutorial\\_ptb\\_lstm\\_state\\_is_tuple.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_ptb_lstm_state_is_tuple.py).\n - Word Embedding (Word2vec). 
Train a word embedding matrix, see [tutorial\\_word2vec_basic.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial\\_word2vec_basic.py).\n - Restore Embedding matrix. Restore a pre-train embedding matrix, see [tutorial\\_generate_text.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_generate_text.py).\n - Text Generation. Generates new text scripts, using LSTM network, see [tutorial\\_generate_text.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_generate_text.py).\n - Chinese Text Anti-Spam by [pakrchen](https:\u002F\u002Fgithub.com\u002Fpakrchen\u002Ftext-antispam).\n - [Chatbot in 200 lines of code](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fseq2seq-chatbot) for [Seq2Seq](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fmodules\u002Flayers.html#simple-seq2seq).\n - FastText Sentence Classification (IMDB), see [tutorial\\_imdb\\_fasttext.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_imdb_fasttext.py) by [tomtung](https:\u002F\u002Fgithub.com\u002Ftomtung).\n\n## Adversarial Learning\n- DCGAN (CelebA). 
Generating images by [Deep Convolutional Generative Adversarial Networks](http:\u002F\u002Farxiv.org\u002Fabs\u002F1511.06434) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fdcgan).\n- [Generative Adversarial Text to Image Synthesis](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image).\n- [Unsupervised Image to Image Translation with Generative Adversarial Networks](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FUnsup-Im2Im) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FUnsup-Im2Im).\n- [Improved CycleGAN](https:\u002F\u002Fgithub.com\u002Fluoxier\u002FCycleGAN_Tensorlayer) with resize-convolution by [luoxier](https:\u002F\u002Fgithub.com\u002Fluoxier\u002FCycleGAN_Tensorlayer)\n- [Super Resolution GAN](https:\u002F\u002Farxiv.org\u002Fabs\u002F1609.04802) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FSRGAN).\n- [DAGAN: Fast Compressed Sensing MRI Reconstruction](https:\u002F\u002Fgithub.com\u002FnebulaV\u002FDAGAN) by [nebulaV](https:\u002F\u002Fgithub.com\u002FnebulaV\u002FDAGAN).\n\n## Reinforcement Learning\n - Policy Gradient \u002F Network (Atari Ping Pong), see [tutorial\\_atari_pong.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_atari_pong.py).\n - Deep Q-Network (Frozen lake), see [tutorial\\_frozenlake_dqn.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_frozenlake_dqn.py).\n - Q-Table learning algorithm (Frozen lake), see [tutorial\\_frozenlake\\_q_table.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_frozenlake_q_table.py).\n - Asynchronous Policy Gradient using TensorDB (Atari Ping Pong) by [nebulaV](https:\u002F\u002Fgithub.com\u002Fakaraspt\u002Ftl_paper).\n - AC for discrete action space (Cartpole), see 
[tutorial\\_cartpole_ac.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cartpole_ac.py).\n - A3C for continuous action space (Bipedal Walker), see [tutorial\\_bipedalwalker_a3c*.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_bipedalwalker_a3c_continuous_action.py).\n - [DAGGER](https:\u002F\u002Fwww.cs.cmu.edu\u002F%7Esross1\u002Fpublications\u002FRoss-AIStats11-NoRegret.pdf) for ([Gym Torcs](https:\u002F\u002Fgithub.com\u002Fugo-nama-kun\u002Fgym_torcs)) by [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FImitation-Learning-Dagger-Torcs).\n - [TRPO](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.05477) for continuous and discrete action space by [jjkke88](https:\u002F\u002Fgithub.com\u002Fjjkke88\u002FRL_toolbox).\n\n## Miscellaneous\n - Distributed Training. [mnist](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_distributed.py) and [imagenet](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_imagenet_inceptionV3_distributed.py) by [jorgemf](https:\u002F\u002Fgithub.com\u002Fjorgemf).\n - Merge TF-Slim into TensorLayer. [tutorial\\_inceptionV3_tfslim.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_inceptionV3_tfslim.py).\n - Merge Keras into TensorLayer. [tutorial_keras.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_keras.py).\n - Data augmentation with TFRecord. 
Effective way to load and pre-process data, see [tutorial_tfrecord*.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexample) and [tutorial\\_cifar10_tfrecord.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10_tfrecord.py).\n - Data augmentation with TensorLayer, see [tutorial\\_image_preprocess.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_image_preprocess.py).\n - TensorDB by [fangde](https:\u002F\u002Fgithub.com\u002Ffangde) see [here](https:\u002F\u002Fgithub.com\u002Fakaraspt\u002Ftl_paper).\n- A simple web service - [TensorFlask](https:\u002F\u002Fgithub.com\u002FJoelKronander\u002FTensorFlask) by [JoelKronander](https:\u002F\u002Fgithub.com\u002FJoelKronander).\n- Float 16 half-precision model, see [tutorial\\_mnist_float16.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_float16.py)\n\n \n### Useful Links\n - [Tricks to use TensorLayer](https:\u002F\u002Fgithub.com\u002Fwagamamaz\u002Ftensorlayer-tricks)\n","# TensorFlow与深度学习教程\n\n\u003Cdiv align=\"center\">\n  \u003Cdiv class=\"TensorFlow\">\n    \u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwagamamaz_tensorflow-tutorial_readme_443cd930ce5d.png\" style=\"float: left; margin-left: 5px; margin-bottom: 5px;\">\u003Cbr>\u003Cbr>\n  \u003C\u002Fdiv>\n\u003C\u002Fdiv>\n\n## Google的深度学习教程\n\n - [TensorFlow官方深度学习教程](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Findex.html) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002F)。\n - 带Dropout的多层感知机 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fmnist\u002Fbeginners\u002Findex.html) 
[[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fmnist_beginners.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer-is-simple) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer)\n - 自编码器 [TensorLayer](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#tensorlayer) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#denoising-autoencoder)\n - 卷积神经网络 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fmnist\u002Fpros\u002Findex.html) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fmnist_pros.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#convolutional-neural-network-cnn) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#convolutional-neural-network)\n - 循环神经网络 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Frecurrent\u002Findex.html#recurrent-neural-networks) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Frecurrent.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-lstm) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#lstm)\n - 深度强化学习 [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-reinforcement-learning) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#id13)\n - 序列到序列 
[TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fseq2seq\u002Findex.html#sequence-to-sequence-models)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-translation) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#id30)\n - 词嵌入 [TensorFlow](https:\u002F\u002Fwww.tensorflow.org\u002Fversions\u002Fmaster\u002Ftutorials\u002Fword2vec\u002Findex.html#vector-representations-of-words) [[中文]](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002Ftutorials\u002Fword2vec.html)  [TensorLayer](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fuser\u002Ftutorial.html#understand-word-embedding) [[中文]](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002Fzh\u002Flatest\u002Fuser\u002Ftutorial.html#word-embedding)\n \n## 深度学习阅读清单\n\n - [MIT深度学习书籍](http:\u002F\u002Fwww.deeplearningbook.org)\n - [Karpathy博客](http:\u002F\u002Fkarpathy.github.io)\n - [斯坦福UFLDL教程](http:\u002F\u002Fdeeplearning.stanford.edu\u002Ftutorial\u002F)\n - [Colah的博客——词嵌入](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2014-07-NLP-RNNs-Representations\u002F) [[中文]](http:\u002F\u002Fdataunion.org\u002F9331.html)\n - [Colah的博客——理解LSTM](http:\u002F\u002Fcolah.github.io\u002Fposts\u002F2015-08-Understanding-LSTMs\u002F) [[中文]](http:\u002F\u002Fmp.weixin.qq.com\u002Fs?__biz=MzI3NDExNDY3Nw==&mid=2649764821&idx=1&sn=dd325565b40fcbad6e90a9398414dede&scene=2&srcid=0505U2iFJ7tfXgB8yPfNkwrA&from=timeline&isappinstalled=0#wechat_redirect)\n\n## 教程索引\n\n#### 0 - 前置知识\n- 机器学习简介（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fml_introduction.ipynb)）\n- MNIST 
数据集简介（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fmnist_dataset_intro.ipynb)）\n\n#### 1 - 入门\n- Hello World（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F1_Introduction\u002Fhelloworld.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F1_Introduction\u002Fhelloworld.py)）\n- 基本操作（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F1_Introduction\u002Fbasic_operations.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F1_Introduction\u002Fbasic_operations.py)）\n\n#### 2 - 基础模型\n- 最近邻算法（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Fnearest_neighbor.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Fnearest_neighbor.py)）\n- 线性回归（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Flinear_regression.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Flinear_regression.py)）\n- 逻辑回归（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F2_BasicModels\u002Flogistic_regression.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F2_BasicModels\u002Flogistic_regression.py)）\n\n#### 3 - 神经网络\n- 
多层感知机（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fmultilayer_perceptron.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fmultilayer_perceptron.py)）\n- 卷积神经网络（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fconvolutional_network.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fconvolutional_network.py)）\n- 循环神经网络（LSTM）（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Frecurrent_network.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Frecurrent_network.py)）\n- 双向循环神经网络（LSTM）（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fbidirectional_rnn.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fbidirectional_rnn.py)）\n- 动态循环神经网络（LSTM）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fdynamic_rnn.py)）\n- 自编码器（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F3_NeuralNetworks\u002Fautoencoder.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F3_NeuralNetworks\u002Fautoencoder.py)）\n\n#### 4 - 工具\n- 
保存和恢复模型（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F4_Utils\u002Fsave_restore_model.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Fsave_restore_model.py)）\n- TensorBoard - 图形与损失可视化（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F4_Utils\u002Ftensorboard_basic.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Ftensorboard_basic.py)）\n- TensorBoard - 高级可视化（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F4_Utils\u002Ftensorboard_advanced.py)）\n\n#### 5 - 多 GPU\n- 多 GPU 上的基本操作（[笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F5_MultiGPU\u002Fmultigpu_basics.ipynb)）（[代码](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fexamples\u002F5_MultiGPU\u002Fmultigpu_basics.py)）\n\n## 数据集\n部分示例需要使用 MNIST 数据集进行训练和测试。不用担心，运行这些示例时，该数据集会自动下载（通过 input_data.py 脚本）。MNIST 是一个手写数字数据库，关于该数据集的简要介绍，可以查看 [此笔记本](https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples\u002Fblob\u002Fmaster\u002Fnotebooks\u002F0_Prerequisite\u002Fmnist_dataset_intro.ipynb)。\n\n官方网站：[http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F](http:\u002F\u002Fyann.lecun.com\u002Fexdb\u002Fmnist\u002F)\n\n## 精选仓库\n - [jtoy\u002Fawesome-tensorflow](https:\u002F\u002Fgithub.com\u002Fjtoy\u002Fawesome-tensorflow)\n - [nlintz\u002FTensorFlow-Tutorials](https:\u002F\u002Fgithub.com\u002Fnlintz\u002FTensorFlow-Tutorials)\n - [adatao\u002Ftensorspark](https:\u002F\u002Fgithub.com\u002Fadatao\u002Ftensorspark)\n - 
[ry\u002Ftensorflow-resnet](https:\u002F\u002Fgithub.com\u002Fry\u002Ftensorflow-resnet)\n\n## 技巧\n - [使用 TensorLayer 的技巧](https:\u002F\u002Fgithub.com\u002Fwagamamaz\u002Ftensorlayer-tricks)\n\n## 示例\n\n## 基础\n - 多层感知机（MNIST）- 分类任务，参见 [tutorial_mnist_simple.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_simple.py)。\n - 多层感知机（MNIST）- 使用迭代器进行分类，参见 [method1](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mlp_dropout1.py) 和 [method2](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mlp_dropout2.py)。\n\n## 计算机视觉\n - 去噪自编码器（MNIST）。分类任务，参见 [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py)。\n - 堆叠去噪自编码器与微调（MNIST）。分类任务，参见 [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py)。\n - 卷积网络（MNIST）。分类任务，参见 [tutorial_mnist.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist.py)。\n - 卷积网络（CIFAR-10）。分类任务，参见 [tutorial_cifar10.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10.py) 和 [tutorial_cifar10_tfrecord.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10_tfrecord.py)。\n - VGG 16（ImageNet）。分类任务，参见 [tutorial_vgg16.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_vgg16.py)。\n - VGG 19（ImageNet）。分类任务，参见 [tutorial_vgg19.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_vgg19.py)。\n - InceptionV3（ImageNet）。分类任务，参见 
[tutorial_inceptionV3_tfslim.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_inceptionV3_tfslim.py)。\n - CIFAR 数据集上的 Wide ResNet，由 [ritchieng](https:\u002F\u002Fgithub.com\u002Fritchieng\u002Fwideresnet-tensorlayer) 实现。\n - 更多来自 [TF-Slim](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim) 的卷积神经网络实现可以通过 SlimNetsLayer 连接到 TensorLayer。\n - [空间变换网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1506.02025)，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FSpatial-Transformer-Nets) 实现。\n - 用于脑肿瘤分割的 U-Net，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fu-net-brain-tumor) 实现。\n - 变分自编码器（VAE）用于 CelebA 数据集，由 [yzwxx](https:\u002F\u002Fgithub.com\u002Fyzwxx\u002Fvae-celebA) 实现。\n - 变分自编码器（VAE）用于 MNIST 数据集，由 [BUPTLdy](https:\u002F\u002Fgithub.com\u002FBUPTLdy\u002Ftl-vae) 实现。\n - 图像字幕生成——Google 的 [im2txt](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fim2txt) 的重新实现，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FImage-Captioning) 完成。\n\n## 自然语言处理\n - 循环神经网络（LSTM）。将多个 LSTM 应用于 PTB 数据集进行语言建模，参见 [tutorial_ptb_lstm.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_ptb_lstm.py) 和 [tutorial_ptb_lstm_state_is_tuple.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_ptb_lstm_state_is_tuple.py)。\n - 词嵌入（Word2vec）。训练词嵌入矩阵，参见 [tutorial_word2vec_basic.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_word2vec_basic.py)。\n - 恢复嵌入矩阵。恢复预训练的词嵌入矩阵，参见 [tutorial_generate_text.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_generate_text.py)。\n - 文本生成。使用 LSTM 网络生成新的文本内容，参见 
[tutorial_generate_text.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_generate_text.py)。\n - 中文文本反垃圾邮件，由 [pakrchen](https:\u002F\u002Fgithub.com\u002Fpakrchen\u002Ftext-antispam) 实现。\n - [200 行代码实现的聊天机器人](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fseq2seq-chatbot)，基于 [Seq2Seq](http:\u002F\u002Ftensorlayer.readthedocs.io\u002Fen\u002Flatest\u002Fmodules\u002Flayers.html#simple-seq2seq)。\n - FastText 句子分类（IMDB），由 [tomtung](https:\u002F\u002Fgithub.com\u002Ftomtung) 实现，参见 [tutorial_imdb_fasttext.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_imdb_fasttext.py)。\n\n## 对抗学习\n - DCGAN（CelebA）。使用深度卷积生成对抗网络生成图像，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Fdcgan) 实现。\n - [生成式对抗文本到图像合成](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image)，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftext-to-image) 实现。\n - [无监督图像到图像转换与生成对抗网络](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FUnsup-Im2Im)，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FUnsup-Im2Im) 实现。\n - 改进的 CycleGAN，带有重采样卷积操作，由 [luoxier](https:\u002F\u002Fgithub.com\u002Fluoxier\u002FCycleGAN_Tensorlayer) 实现。\n - 超分辨率 GAN（SRGAN），由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FSRGAN) 实现。\n - DAGAN：快速压缩感知 MRI 重建，由 [nebulaV](https:\u002F\u002Fgithub.com\u002FnebulaV\u002FDAGAN) 实现。\n\n## 强化学习\n - 策略梯度\u002F网络（Atari 乒乓球游戏），参见 [tutorial_atari_pong.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_atari_pong.py)。\n - 深度 Q 网络（Frozen Lake），参见 [tutorial_frozenlake_dqn.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_frozenlake_dqn.py)。\n - Q 表学习算法（Frozen Lake），参见 
[tutorial_frozenlake_q_table.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_frozenlake_q_table.py)。\n - 使用 TensorDB 的异步策略梯度（Atari 乒乓球游戏），由 [nebulaV](https:\u002F\u002Fgithub.com\u002Fakaraspt\u002Ftl_paper) 实现。\n - AC 用于离散动作空间（Cartpole），参见 [tutorial_cartpole_ac.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cartpole_ac.py)。\n - A3C 用于连续动作空间（Bipedal Walker），参见 [tutorial_bipedalwalker_a3c*.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_bipedalwalker_a3c_continuous_action.py)。\n - [DAGGER](https:\u002F\u002Fwww.cs.cmu.edu\u002F~sross1\u002Fpublications\u002FRoss-AIStats11-NoRegret.pdf) 用于 ([Gym Torcs](https:\u002F\u002Fgithub.com\u002Fugo-nama-kun\u002Fgym_torcs))，由 [zsdonghao](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002FImitation-Learning-Dagger-Torcs) 实现。\n - [TRPO](https:\u002F\u002Farxiv.org\u002Fabs\u002F1502.05477) 用于连续和离散动作空间，由 [jjkke88](https:\u002F\u002Fgithub.com\u002Fjjkke88\u002FRL_toolbox) 实现。\n\n## 其他\n - 分布式训练。由[jorgemf](https:\u002F\u002Fgithub.com\u002Fjorgemf)提供的[MNIST](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_distributed.py)和[ImageNet](https:\u002F\u002Fgithub.com\u002Ftensorlayer\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_imagenet_inceptionV3_distributed.py)示例。\n - 将TF-Slim合并到TensorLayer中。[tutorial_inceptionV3_tfslim.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_inceptionV3_tfslim.py)。\n - 将Keras合并到TensorLayer中。[tutorial_keras.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_keras.py)。\n - 
使用TFRecord进行数据增强。这是一种高效的数据加载和预处理方式，详见[tutorial_tfrecord*.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Ftree\u002Fmaster\u002Fexample)和[tutorial_cifar10_tfrecord.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_cifar10_tfrecord.py)。\n - 使用TensorLayer进行数据增强，详见[tutorial_image_preprocess.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_image_preprocess.py)。\n - TensorDB由[fangde](https:\u002F\u002Fgithub.com\u002Ffangde)开发，详情请见[这里](https:\u002F\u002Fgithub.com\u002Fakaraspt\u002Ftl_paper)。\n - 一个简单的Web服务——[TensorFlask](https:\u002F\u002Fgithub.com\u002FJoelKronander\u002FTensorFlask)，由[JoelKronander](https:\u002F\u002Fgithub.com\u002FJoelKronander)开发。\n - 半精度浮点模型（float 16），详见[tutorial_mnist_float16.py](https:\u002F\u002Fgithub.com\u002Fzsdonghao\u002Ftensorlayer\u002Fblob\u002Fmaster\u002Fexample\u002Ftutorial_mnist_float16.py)。\n\n \n### 有用链接\n - [使用TensorLayer的技巧](https:\u002F\u002Fgithub.com\u002Fwagamamaz\u002Ftensorlayer-tricks)","# TensorFlow 教程快速上手指南\n\n本指南基于 `tensorflow-tutorial` 项目整理，旨在帮助开发者快速掌握 TensorFlow 基础操作、经典模型构建及实用工具。\n\n## 1. 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**：Linux, macOS 或 Windows (推荐 Linux 以获得最佳性能)\n*   **Python 版本**：Python 3.6 - 3.9 (具体版本取决于安装的 TensorFlow 版本)\n*   **前置依赖**：\n    *   `pip` (Python 包管理工具)\n    *   `git` (用于克隆代码仓库)\n    *   `Jupyter Notebook` (可选，用于运行 `.ipynb` 格式的交互式教程)\n\n> **提示**：部分示例需要 MNIST 数据集。运行代码时，脚本会自动通过 `input_data.py` 下载该数据集，无需手动准备。\n\n## 2. 
安装步骤\n\n### 2.1 克隆项目代码\n首先，将教程代码仓库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Faymericdamien\u002FTensorFlow-Examples.git\ncd TensorFlow-Examples\n```\n\n### 2.2 安装 TensorFlow\n建议使用国内镜像源加速安装。以下命令使用清华大学镜像源安装最新稳定版的 TensorFlow：\n\n```bash\npip install tensorflow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n如果您需要使用 GPU 加速，请确保已安装匹配的 NVIDIA 驱动、CUDA Toolkit 和 cuDNN。TensorFlow 1.x 需要单独安装 `tensorflow-gpu` 包；自 TensorFlow 2.x 起，GPU 支持已并入 `tensorflow` 主包，无需额外安装独立的 GPU 包：\n\n```bash\n# 对于较新版本的 TensorFlow，GPU 支持通常包含在主包中，若需特定版本可参考官方文档\npip install tensorflow -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n### 2.3 安装其他依赖\n部分高级示例可能需要 `matplotlib`, `numpy`, `pillow` 等库，建议一并安装：\n\n```bash\npip install numpy matplotlib pillow jupyter -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 3. 基本使用\n\n本项目提供了从\"Hello World\"到复杂神经网络的各种示例，分为 **Notebook (交互式)** 和 **Python 脚本** 两种形式。\n\n### 3.1 运行第一个示例：Hello World\n这是最基础的入门示例，用于验证环境是否配置正确。\n\n**方式一：运行 Python 脚本**\n```bash\npython examples\u002F1_Introduction\u002Fhelloworld.py\n```\n\n**方式二：使用 Jupyter Notebook**\n启动 Jupyter 服务并打开对应的笔记本：\n```bash\njupyter notebook notebooks\u002F1_Introduction\u002Fhelloworld.ipynb\n```\n\n### 3.2 尝试基础模型：线性回归\n进入 `2_BasicModels` 目录，体验经典的机器学习算法实现。\n\n```bash\n# 运行线性回归示例\npython examples\u002F2_BasicModels\u002Flinear_regression.py\n\n# 或者运行逻辑回归示例\npython examples\u002F2_BasicModels\u002Flogistic_regression.py\n```\n\n### 3.3 构建神经网络\n项目核心部分位于 `3_NeuralNetworks`，包含多层感知机 (MLP)、卷积神经网络 (CNN) 和循环神经网络 (RNN\u002FLSTM)。\n\n**运行 CNN 示例 (手写数字识别)：**\n```bash\npython examples\u002F3_NeuralNetworks\u002Fconvolutional_network.py\n```\n*注：首次运行时会自动下载 MNIST 数据集。*\n\n**运行 RNN (LSTM) 示例：**\n```bash\npython examples\u002F3_NeuralNetworks\u002Frecurrent_network.py\n```\n\n### 3.4 可视化与工具\n使用 TensorBoard 查看训练过程中的损失变化和计算图。\n\n```bash\n# 运行生成日志的脚本\npython examples\u002F4_Utils\u002Ftensorboard_basic.py\n\n# 启动 TensorBoard 查看可视化结果（日志路径需与脚本写入的路径一致，默认为 \u002Ftmp\u002Ftensorflow_logs）\ntensorboard 
--logdir=\u002Ftmp\u002Ftensorflow_logs\n```\n然后在浏览器中访问显示的地址（通常是 `http:\u002F\u002Flocalhost:6006`）。\n\n### 3.5 进阶资源\n*   **多 GPU 训练**：参考 `5_MultiGPU` 目录下的示例。\n*   **中文文档补充**：本项目部分概念可参考极客学院整理的 [TensorFlow 中文文档](http:\u002F\u002Fwiki.jikexueyuan.com\u002Fproject\u002Ftensorflow-zh\u002F) 或 [TensorLayer 中文教程](http:\u002F\u002Ftensorlayercn.readthedocs.io\u002F) 以获取更详细的理论解释。","某初创公司的算法工程师小李需要在两周内为电商客户构建一个商品评论情感分析模型，但他对 TensorFlow 的底层 API 和神经网络架构尚不熟悉。\n\n### 没有 tensorflow-tutorial 时\n- **入门门槛极高**：面对官方晦涩的文档，小李花费三天时间仍无法理清从数据加载到模型训练的标准代码结构，甚至卡在\"Hello World\"环境配置上。\n- **架构实现困难**：试图手动编写 LSTM（长短期记忆网络）处理文本序列时，因不理解门控机制的代码实现，导致模型无法收敛且报错频发。\n- **资源查找分散**：为了理解词向量（Word Embedding）和 Dropout 等关键概念，需要在 GitHub、博客和论坛间反复跳转，缺乏系统化的中文对照资料，学习效率极低。\n- **试错成本高昂**：由于缺乏标准的 MNIST 数据集引入示例和预训练参考，每次调整参数都像“盲人摸象”，严重拖慢了项目交付进度。\n\n### 使用 tensorflow-tutorial 后\n- **快速上手实践**：直接复用教程中清晰的\"Hello World\"和 MNIST 数据集引入 Notebook，半天内便跑通了第一个深度学习流程，确立了开发基准。\n- **架构按需调用**：参考教程中现成的 RNN 和 TensorLayer 封装代码，迅速理解了 LSTM 的实现逻辑，并成功将其迁移到评论数据的情感分类任务中。\n- **知识体系完整**：利用教程提供的中英文双语索引和深度学习阅读清单，系统化掌握了从 MLP 到 Seq2Seq 的核心概念，不再需要碎片化搜索。\n- **迭代效率倍增**：基于教程提供的标准代码模板进行微调，快速完成了模型原型的验证与优化，最终提前两天高质量交付了项目。\n\ntensorflow-tutorial 通过将复杂的理论转化为可执行的代码案例，极大地缩短了开发者从理论学习到实际落地的路径。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fwagamamaz_tensorflow-tutorial_443cd930.png","wagamamaz","zhangrui","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fwagamamaz_9cf3b384.jpg","AI@CMU, UCLA",null,"Beijing","https:\u002F\u002Fgithub.com\u002Fwagamamaz",731,208,"2026-03-24T17:11:53","未说明","未说明（包含多 GPU 操作示例，暗示支持 NVIDIA GPU，但无具体型号或显存要求）",{"notes":89,"python":86,"dependencies":90},"本项目为 TensorFlow 和 TensorLayer 的教程集合。运行示例时会自动下载 MNIST 等数据集。部分高级示例（如 VGG, InceptionV3）可能需要较大的计算资源。代码提供 Python 脚本和 Jupyter Notebook 
两种格式。",[91,92],"tensorflow","tensorlayer",[13,26],[95,96,97,91,92,98,99,100,101,102,103,104,105,106,107,108,109,110,111,112],"recurrent-neural-networks","convolutional-neural-networks","deep-learning-tutorial","keras","deep-reinforcement-learning","tensorflow-tutorials","deep-learning","machine-learning","notebook","autoencoder","multi-layer-perceptron","reinforcement-learning","tflearn","neural-networks","neural-network","neural-machine-translation","nlp","cnn","2026-03-27T02:49:30.150509","2026-04-06T08:08:37.403015",[],[]]