[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-swyxio--ai-notes":3,"tool-swyxio--ai-notes":62},[4,18,26,36,46,54],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",159267,2,"2026-04-17T11:29:14",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":42,"last_commit_at":43,"category_tags":44,"status":17},8272,"opencode","anomalyco\u002Fopencode","OpenCode 是一款开源的 AI 编程助手（Coding Agent），旨在像一位智能搭档一样融入您的开发流程。它不仅仅是一个代码补全插件，而是一个能够理解项目上下文、自主规划任务并执行复杂编码操作的智能体。无论是生成全新功能、重构现有代码，还是排查难以定位的 Bug，OpenCode 都能通过自然语言交互高效完成，显著减少开发者在重复性劳动和上下文切换上的时间消耗。\n\n这款工具专为软件开发者、工程师及技术研究人员设计，特别适合希望利用大模型能力来提升编码效率、加速原型开发或处理遗留代码维护的专业人群。其核心亮点在于完全开源的架构，这意味着用户可以审查代码逻辑、自定义行为策略，甚至私有化部署以保障数据安全，彻底打破了传统闭源 AI 助手的“黑盒”限制。\n\n在技术体验上，OpenCode 提供了灵活的终端界面（Terminal UI）和正在测试中的桌面应用程序，支持 macOS、Windows 及 Linux 全平台。它兼容多种包管理工具，安装便捷，并能无缝集成到现有的开发环境中。无论您是追求极致控制权的资深极客，还是渴望提升产出的独立开发者，OpenCode 都提供了一个透明、可信",144296,1,"2026-04-16T14:50:03",[13,45],"插件",{"id":47,"name":48,"github_repo":49,"description_zh":50,"stars":51,"difficulty_score":32,"last_commit_at":52,"category_tags":53,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 
都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":55,"name":56,"github_repo":57,"description_zh":58,"stars":59,"difficulty_score":32,"last_commit_at":60,"category_tags":61,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[45,13,15,14],{"id":63,"github_repo":64,"name":65,"description_en":66,"description_zh":67,"ai_summary_zh":67,"readme_en":68,"readme_zh":69,"quickstart_zh":70,"use_case_zh":71,"hero_image_url":72,"owner_login":73,"owner_name":74,"owner_avatar_url":75,"owner_bio":76,"owner_company":77,"owner_location":78,"owner_email":79,"owner_twitter":80,"owner_website":81,"owner_url":82,"languages":83,"stars":88,"forks":89,"last_commit_at":90,"license":91,"difficulty_score":42,"env_os":92,"env_gpu":93,"env_ram":93,"env_deps":94,"category_tags":97,"github_topics":99,"view_count":32,"oss_zip_url":79,"oss_zip_packed_at":79,"status":17,"created_at":107,"updated_at":108,"faqs":109,"releases":110},8552,"swyxio\u002Fai-notes","ai-notes","notes for software engineers getting up to speed on new AI developments. 
Serves as datastore for https:\u002F\u002Flatent.space writing, and product brainstorming, but has cleaned up canonical references under the \u002FResources folder.","ai-notes 是一个专为软件工程师打造的 AI 技术知识库，旨在帮助从业者快速掌握生成式 AI 与大语言模型领域的最新动态。面对人工智能技术迭代极快、信息碎片化严重的痛点，它将原本分散的技术资讯、产品构思及权威参考进行了系统化梳理与沉淀，解决了开发者难以高效追踪前沿进展的问题。\n\n该资源库内容覆盖广泛，不仅包含文本生成（如 GPT-4）、代码辅助、语音处理等核心领域的深度笔记，还特别强化了图像生成（如 Stable Diffusion）的基础设施与硬件扩展知识。其独特亮点在于提供了大量经过验证的“提示词速查表”（Swipe Files），涵盖文本与图像创作的高质量范例，并持续更新关于\"AI 智能体”等新兴方向的探索性文档。此外，它还整理了法律伦理、行业社区及关键人物观点，形成了从理论到实践的全景视图。\n\nai-notes 非常适合希望深入理解 AI 工程化的开发者、研究人员以及需要灵感的技术型设计师使用。无论是寻找具体的实现方案，还是把握宏观技术趋势，这里都能提供清晰、可靠且持续更新的“原材料”，助你在快速变化的 AI 浪潮中保持敏锐与高效。","# AI Notes\n\nnotes on AI state of the art, with a focus on generative and large language models. These are the \"raw materials\" for the https:\u002F\u002Flspace.swyx.io\u002F newsletter.\n\n> This repo used to be called https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fprompt-eng, but was renamed because [Prompt Engineering is Overhyped](https:\u002F\u002Ftwitter.com\u002Fswyx\u002Fstatus\u002F1596184757682941953). 
This is now an [AI Engineering](https:\u002F\u002Fwww.latent.space\u002Fp\u002Fai-engineer) notes repo.\n\nThis Readme is just the high level overview of the space; you should see the most updates in the OTHER markdown files in this repo:\n\n- `TEXT.md` - text generation, mostly with GPT-4\n\t- `TEXT_CHAT.md` - information on ChatGPT and competitors, as well as derivative products\n\t- `TEXT_SEARCH.md` - information on GPT-4 enabled semantic search and other info\n\t- `TEXT_PROMPTS.md` - a small [swipe file](https:\u002F\u002Fwww.swyx.io\u002Fswipe-files-strategy) of good GPT3 prompts\n- `INFRA.md` - raw notes on AI Infrastructure, Hardware and Scaling\n- `AUDIO.md` - tracking audio\u002Fmusic\u002Fvoice transcription + generation\n- `CODE.md` - codegen models, like Copilot\n- `IMAGE_GEN.md` - the most developed file, with the heaviest emphasis notes on Stable Diffusion, and some on midjourney and dalle.\n\t- `IMAGE_PROMPTS.md` - a small [swipe file](https:\u002F\u002Fwww.swyx.io\u002Fswipe-files-strategy) of good image prompts\n- **Resources**: standing, cleaned up resources that are meant to be permalinked to\n- **stub notes** - very small\u002Flightweight proto pages of future coverage areas\n\t\t  - `AGENTS.md` - tracking \"agentic AI\"\n- **blog ideas**- potential blog post ideas derived from these notes bc\n\n\u003C!-- START doctoc generated TOC please keep comment here to allow auto update -->\n\u003C!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->\n\u003Cdetails>\n\u003Csummary>Table of Contents\u003C\u002Fsummary>\n\n- [Motivational Use Cases](#motivational-use-cases)\n- [Top AI Reads](#top-ai-reads)\n- [Communities](#communities)\n- [People](#people)\n- [Misc](#misc)\n- [Quotes, Reality & Demotivation](#quotes-reality--demotivation)\n- [Legal, Ethics, and Privacy](#legal-ethics-and-privacy)\n\n\u003C\u002Fdetails>\n\u003C!-- END doctoc generated TOC please keep comment here to allow auto update -->\n\n## Motivational Use Cases\n\n- 
images\n  - https:\u002F\u002Fmpost.io\u002Fbest-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts\n  - [3D MRI synthetic brain images](https:\u002F\u002Ftwitter.com\u002FWarvito\u002Fstatus\u002F1570691960792580096?) - [positive reception from neuroimaging statistician](https:\u002F\u002Ftwitter.com\u002FdanCMDstat\u002Fstatus\u002F1572312699853312000?s=20&t=x-ouUbWA5n0-PxTGZcy2iA)\n  - [multiplayer stable diffusion](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhuggingface-projects\u002Fstable-diffusion-multiplayer?roomid=room-0)\n- video\n  - img2img of famous movie scenes ([lalaland](https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1565678995986911236))\n    - [img2img transforming actor](https:\u002F\u002Ftwitter.com\u002FLighthiserScott\u002Fstatus\u002F1567355079228887041?s=20&t=cBH4EGPC4r0Earm-mDbOKA) with ebsynth + koe_recast\n    - how ebsynth works https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1612047103806545923?s=20\n  - virtual fashion ([karenxcheng](https:\u002F\u002Ftwitter.com\u002Fkarenxcheng\u002Fstatus\u002F1564626773001719813))\n  - [seamless tiling images](https:\u002F\u002Ftwitter.com\u002Freplicatehq\u002Fstatus\u002F1568288903177859072?s=20&t=sRd3HRehPMcj1QfcOwDMKg)\n  - evolution of scenes ([xander](https:\u002F\u002Ftwitter.com\u002Fxsteenbrugge\u002Fstatus\u002F1558508866463219712))\n  - outpainting https:\u002F\u002Ftwitter.com\u002Forbamsterdam\u002Fstatus\u002F1568200010747068417?s=21&t=rliacnWOIjJMiS37s8qCCw\n  - webUI img2img collaboration https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1563582621757898752\n  - image to video with rotation https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1571096804539912192\n  - \"prompt paint\" https:\u002F\u002Ftwitter.com\u002F1littlecoder\u002Fstatus\u002F1572573152974372864\n  - audio2video animation of your face https:\u002F\u002Ftwitter.com\u002Fsiavashg\u002Fstatus\u002F1597588865665363969\n  - 
physical toys to 3d model + animation https:\u002F\u002Ftwitter.com\u002Fsergeyglkn\u002Fstatus\u002F1587430510988611584\n  - music videos \n    - [video killed the radio star](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WJaxFbdjm8c), [colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fdmarx\u002Fvideo-killed-the-radio-star\u002Fblob\u002Fmain\u002FVideo_Killed_The_Radio_Star_Defusion.ipynb) This uses OpenAI's Whisper speech-to-text, allowing you to take a YouTube video & create a Stable Diffusion animation prompted by the lyrics in the YouTube video\n    - [Stable Diffusion Videos](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fnateraw\u002Fstable-diffusion-videos\u002Fblob\u002Fmain\u002Fstable_diffusion_videos.ipynb) generates videos by interpolating between prompts and audio\n  - direct text2video project\n    - https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1575546841533497344\n    - https:\u002F\u002Fmakeavideo.studio\u002F - explorer https:\u002F\u002Fwebvid.datasette.io\u002Fwebvid\u002Fvideos\n    - https:\u002F\u002Fphenaki.video\u002F\n    - https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo\n    - https:\u002F\u002Fimagen.research.google\u002Fvideo\u002F\n- text-to-3d https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1575541930905243652\n  -  https:\u002F\u002Fdreamfusion3d.github.io\u002F\n  -  open source impl: https:\u002F\u002Fgithub.com\u002Fashawkey\u002Fstable-dreamfusion\n    - demo https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1578035919403503616\n-  text products\n\t- has a list of usecases at the end https:\u002F\u002Fhuyenchip.com\u002F2023\u002F04\u002F11\u002Fllm-engineering.html\n  - Jasper\n  - GPT for Obsidian https:\u002F\u002Freasonabledeviations.com\u002F2023\u002F02\u002F05\u002Fgpt-for-second-brain\u002F\n  - gpt3 email https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fgpt3-email and [email 
clustering](https:\u002F\u002Fgithub.com\u002Fdanielgross\u002Fembedland\u002Fblob\u002Fmain\u002Fbench.py#L281)\n  - gpt3() in google sheet [2020](https:\u002F\u002Ftwitter.com\u002Fpavtalk\u002Fstatus\u002F1285410751092416513?s=20&t=ppZhNO_OuQmXkjHQ7dl4wg), [2022](https:\u002F\u002Ftwitter.com\u002Fshubroski\u002Fstatus\u002F1587136794797244417) - [sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1YzeQLG_JVqHKz5z4QE9wUsYbLoVZZxbGDnj7wCf_0QQ\u002Fedit) google sheets https:\u002F\u002Ftwitter.com\u002Fmehran__jalali\u002Fstatus\u002F1608159307513618433\n\t  - https:\u002F\u002Fgpt3demo.com\u002Fapps\u002Fgoogle-sheets\n\t  - Charm https:\u002F\u002Ftwitter.com\u002Fshubroski\u002Fstatus\u002F1620139262925754368?s=20\n  - https:\u002F\u002Fwww.summari.com\u002F Summari helps busy people read more\n- market maps\u002Flandscapes\n\t- elad gil 2024 [stack chart](https:\u002F\u002Fblog.eladgil.com\u002Fp\u002Fthings-i-dont-know-about-ai)\n\t- sequoia market map [jan 2023](https:\u002F\u002Ftwitter.com\u002Fsonyatweetybird\u002Fstatus\u002F1584580362339962880), [july 2023](https:\u002F\u002Fwww.sequoiacap.com\u002Farticle\u002Fllm-stack-perspective\u002F), [sep 2023](https:\u002F\u002Fwww.sequoiacap.com\u002Farticle\u002Fgenerative-ai-act-two\u002F)\n\t- base10 market map https:\u002F\u002Ftwitter.com\u002Fletsenhance_io\u002Fstatus\u002F1594826383305449491\n\t- matt shumer market map https:\u002F\u002Ftwitter.com\u002Fmattshumer_\u002Fstatus\u002F1620465468229451776 https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F1sewTBzRF087F6hFXiyeOIsGC1N4N3O7rYzijVexCgoQ\u002Fedit\n\t- nfx https:\u002F\u002Fwww.nfx.com\u002Fpost\u002Fgenerative-ai-tech-5-layers?ref=context-by-cohere\n\t- a16z https:\u002F\u002Fa16z.com\u002F2023\u002F01\u002F19\u002Fwho-owns-the-generative-ai-platform\u002F\n\t\t- https:\u002F\u002Fa16z.com\u002F2023\u002F06\u002F20\u002Femerging-architectures-for-llm-applications\u002F\n\t\t- 
https:\u002F\u002Fa16z.com\u002F100-gen-ai-apps\n\t- madrona https:\u002F\u002Fwww.madrona.com\u002Ffoundation-models\u002F\n\t- coatue\n\t\t- https:\u002F\u002Fwww.coatue.com\u002Fblog\u002Fperspective\u002Fai-the-coming-revolution-2023\n\t\t- https:\u002F\u002Fx.com\u002FSam_Awrabi\u002Fstatus\u002F1742324900034150646?s=20\n- game assets - \n\t- emad thread https:\u002F\u002Ftwitter.com\u002FEMostaque\u002Fstatus\u002F1591436813750906882\n\t- scenario.gg https:\u002F\u002Ftwitter.com\u002Femmanuel_2m\u002Fstatus\u002F1593356241283125251\n\t- [3d game character modeling example](https:\u002F\u002Fwww.traffickinggame.com\u002Fai-assisted-graphics\u002F)\n\t- MarioGPT https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05981.pdf https:\u002F\u002Fwww.slashgear.com\u002F1199870\u002Fmariogpt-uses-ai-to-generate-endless-super-mario-levels-for-free\u002F https:\u002F\u002Fgithub.com\u002Fshyamsn97\u002Fmario-gpt\u002Fblob\u002Fmain\u002Fmario_gpt\u002Flevel.py\n\t- https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=36295227\n\n## Top AI Reads\n\nThe more advanced GPT3 reads have been split out to https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fai-notes\u002Fblob\u002Fmain\u002FTEXT.md\n\n- https:\u002F\u002Fwww.gwern.net\u002FGPT-3#prompts-as-programming\n- https:\u002F\u002Flearnprompting.org\u002F\n\n### Beginner Reads\n\n  - [Karpathy 2025 Intro to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7xTGNNLPyMI) ([summary](https:\u002F\u002Fanfalmushtaq.com\u002Farticles\u002Fdeep-dive-into-llms-like-chatgpt-tldr))\n  - [Bill Gates on AI](https:\u002F\u002Fwww.gatesnotes.com\u002FThe-Age-of-AI-Has-Begun) ([tweet](https:\u002F\u002Ftwitter.com\u002Fgdb\u002Fstatus\u002F1638310597325365251?s=20))\n\t  - \"The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. 
It will change the way people work, learn, travel, get health care, and communicate with each other.\"\n  - [Steve Yegge on AI for developers](https:\u002F\u002Fabout.sourcegraph.com\u002Fblog\u002Fcheating-is-all-you-need)\n  - [Karpathy 2023 intro to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zjkBMFhNj_g) (notes from [Sarah Chieng](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1729569057475879103))\n  - [Prompt Engineering guide from OpenAI at NeurIPS](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1741926266087870784) via Sarah Chieng\n  - [Why this AI moment might be the real deal](https:\u002F\u002Fwww.thenewatlantis.com\u002Fpublications\u002Fwhy-this-ai-moment-may-be-the-real-deal)\n  - Sam Altman - [Moore's Law for Everything](https:\u002F\u002Fmoores.samaltman.com\u002F)\n  - excellent introduction to foundation models from MSR https:\u002F\u002Fyoutu.be\u002FHQI6O5DlyFc\n  - openAI prompt tutorial https:\u002F\u002Fbeta.openai.com\u002Fdocs\u002Fquickstart\u002Fadd-some-examples\n  - google LAMDA intro https:\u002F\u002Faitestkitchen.withgoogle.com\u002Fhow-lamda-works\n  - karpathy gradient descent course\n  - FT visual storytelling on \"[how transformers work](https:\u002F\u002Fig.ft.com\u002Fgenerative-ai\u002F)\"\n  - DALLE2 prompt writing book http:\u002F\u002Fdallery.gallery\u002Fwp-content\u002Fuploads\u002F2022\u002F07\u002FThe-DALL%C2%B7E-2-prompt-book-v1.02.pdf\n  - https:\u002F\u002Fmedium.com\u002Fnerd-for-tech\u002Fprompt-engineering-the-career-of-future-2fb93f90f117\n  - [How to use AI to do stuff](https:\u002F\u002Fwww.oneusefulthing.org\u002Fp\u002Fhow-to-use-ai-to-do-stuff-an-opinionated) across getting information, working with data, and making images\n  - https:\u002F\u002Fourworldindata.org\u002Fbrief-history-of-ai ai progress overview with nice charts\n  - Jon Stokes' [AI Content Generation, Part 1: Machine Learning 
Basics](https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fai-content-generation-part-1-machine)\n  - [Andrew Ng - Opportunities in AI](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5p248yoa3oE)\n  - [What are transformer models and how do they work?](https:\u002F\u002Ftxt.cohere.ai\u002Fwhat-are-transformer-models\u002F) - maybe [a bit too high level](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35577138)\n  - text generation\n\t  - humanloop's [prompt engineering 101](https:\u002F\u002Fwebsite-olo3k29b2-humanloopml.vercel.app\u002Fblog\u002Fprompt-engineering-101)\n\t  - Stephen Wolfram's explanations https:\u002F\u002Fwritings.stephenwolfram.com\u002F2023\u002F02\u002Fwhat-is-chatgpt-doing-and-why-does-it-work\u002F\n\t  - equivalent from jon stokes jonstokes.com\u002Fp\u002Fthe-chat-stack-gpt-4-and-the-near\n\t  - https:\u002F\u002Fandymatuschak.org\u002Fprompts\u002F\n\t  - cohere's LLM university https:\u002F\u002Fdocs.cohere.com\u002Fdocs\u002Fllmu \n\t\t  - Jay alammar's guide to all the things: https:\u002F\u002Fllm.university\u002F\n\t  - https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fchatgpt-explained-a-guide-for-normies for normies\n  - image generation\n\t  - https:\u002F\u002Fwiki.installgentoo.com\u002Fwiki\u002FStable_Diffusion overview\n\t  - https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fx41n87\u002Fhow_to_get_images_that_dont_suck_a\u002F\n\t  - https:\u002F\u002Fmpost.io\u002Fbest-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts\u002F\n\t  - https:\u002F\u002Fwww.kdnuggets.com\u002F2021\u002F03\u002Fbeginners-guide-clip-model.html \n\t  - https:\u002F\u002Fwww.seangoedecke.com\u002Fdiffusion-models-explained\u002F\n  - for nontechnical\n    - https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fai-content-generation-part-1-machine\n    - https:\u002F\u002Fwww.protocol.com\u002Fgenerative-ai-startup-landscape-map\n    - 
https:\u002F\u002Ftwitter.com\u002Fsaranormous\u002Fstatus\u002F1572791179636518913\n\n### Intermediate Reads\n\n  - **State of AI Report**: [2018](https:\u002F\u002Fwww.stateof.ai\u002F2018), [2019](https:\u002F\u002Fwww.stateof.ai\u002F2019), [2020](https:\u002F\u002Fwww.stateof.ai\u002F2020), [2021](https:\u002F\u002Fwww.stateof.ai\u002F2021), [2022](https:\u002F\u002Fwww.stateof.ai\u002F)\n  - reverse chronological major events https:\u002F\u002Fbleedingedge.ai\u002F\n  - [What we Know about LLMs](https:\u002F\u002Fwillthompson.name\u002Fwhat-we-know-about-llms-primer#block-920907dc37394adcac5bf4e7318adc10) - great recap of research\n  - [Karpathy's 1hr guide to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zjkBMFhNj_g) - summary [from Sarah Chieng](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1729569057475879103)\n\t  - 1.  What is a large language model (LLM)?\n\t\t  - There are two main components of an LLM\n\t\t    -   What does an LLM do?\n\t1.  How do you create an LLM?\n\t    -   Stage 1: Model Pre-Training\n\t    -   Stage 2: Model Fine-tuning\n\t        -   Stage 2b: [Optional] Additional Fine-tuning\n\t    -   Stage 3: Model Inference\n\t    -   Stage 4: [Optional] Supercharging LLMs with Customization\n\t1.  The Current LLM “Leaderboard”\n\t2.  The Future of LLMs: What’s Next?\n\t    -   How to improve LLM performance?\n\t        -   LLM Scaling Laws\n\t        -   Self-Improvement\n\t    -   How to improve LLM abilities?\n\t        -   Multimodality\n\t        -   System 1 + 2 Thinking\n\t1.  
The LLM Dark Arts\n\t    -   Jailbreaking\n\t    -   Prompt Injecting\n\t    -   Data Poisoning & Backdoor Attacks\n\t- [Evan Morikawa guide to LLM math](https:\u002F\u002Fnewsletter.pragmaticengineer.com\u002Fp\u002Fscaling-chatgpt) especially the 5 scaling challenges piece\n  -  [A Hacker's Guide to Language Models](https:\u002F\u002Ftwitter.com\u002Fjeremyphoward\u002Fstatus\u002F1705883362991472984?s=20)  ([youtube](https:\u002F\u002Fyoutu.be\u002FjkrNMKz9pWU?si=BNz-v6VmdbX7QDtr)) Jeremy Howard's 90min complete overview of LLM learnings - starting at the basics: the 3-step pre-training \u002F fine-tuning \u002F classifier ULMFiT approach used in all modern LLMs.\n  - https:\u002F\u002Fspreadsheets-are-all-you-need.ai\n  - [\"Catching up on the weird world of LLMs\"](https:\u002F\u002Fsimonwillison.net\u002F2023\u002FAug\u002F3\u002Fweird-world-of-llms\u002F) - Simon Willison's 40min overview + [Open Questions for AI Engineers](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AjLVoAu-u-Q)\n  - [LLMs overview from Flyte](https:\u002F\u002Fflyte.org\u002Fblog\u002Fgetting-started-with-large-language-models-key-things-to-know#what-are-llms)\n  - Clementine Fourrier on [How Evals are Done](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fclefourrier\u002Fllm-evaluation)\n  - [VLMs Zero to Hero](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Fvlms-zero-to-hero) ([tweet](https:\u002F\u002Fx.com\u002Fskalskip92\u002Fstatus\u002F1871247056343322624\u002Fphoto\u002F1))\n  - [Patterns for building LLM-based systems and products](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F) - great recap\n\t  - [Evals](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#evals-to-measure-performance): To measure performance\n\t-   [RAG](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#retrieval-augmented-generation-to-add-knowledge): To add recent, external knowledge\n\t-   
[Fine-tuning](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#fine-tuning-to-get-better-at-specific-tasks): To get better at specific tasks\n\t-   [Caching](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#caching-to-reduce-latency-and-cost): To reduce latency & cost\n\t-   [Guardrails](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#guardrails-to-ensure-output-quality): To ensure output quality\n\t-   [Defensive UX](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#defensive-ux-to-anticipate--handle-errors-gracefully): To anticipate & manage errors gracefully\n\t-   [Collect user feedback](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#collect-user-feedback-to-build-our-data-flywheel): To build our data flywheel\n  - [Vector Databases: A Technical Primer [pdf]](https:\u002F\u002Ftge-data-web.nyc3.digitaloceanspaces.com\u002Fdocs\u002FVector%20Databases%20-%20A%20Technical%20Primer.pdf) very nice slides on Vector DBs\n\t  - Missing coverage of hybrid search (vector + lexical). [Further discussions](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=38971221)\n  - [Visual introduction to ML](http:\u002F\u002Fwww.r2d3.us\u002Fvisual-intro-to-machine-learning-part-1\u002F)\n  - A16z AI Canon https:\u002F\u002Fa16z.com\u002F2023\u002F05\u002F25\u002Fai-canon\u002F\n\t  -  **[Software 2.0](https:\u002F\u002Fkarpathy.medium.com\u002Fsoftware-2-0-a64152b37c35)**: Andrej Karpathy was one of the first to clearly explain (in 2017!) why the new AI wave really matters. His argument is that AI is a new and powerful way to program computers. 
As LLMs have improved rapidly, this thesis has proven prescient, and it gives a good mental model for how the AI market may progress.\n\t-   **[State of GPT](https:\u002F\u002Fbuild.microsoft.com\u002Fen-US\u002Fsessions\u002Fdb3f4859-cd30-4445-a0cd-553c3304f8e2)**: Also from Karpathy, this is a very approachable explanation of how ChatGPT \u002F GPT models in general work, how to use them, and what directions R&D may take.\n\t-   [**What is ChatGPT doing … and why does it work?**](https:\u002F\u002Fwritings.stephenwolfram.com\u002F2023\u002F02\u002Fwhat-is-chatgpt-doing-and-why-does-it-work\u002F): Computer scientist and entrepreneur Stephen Wolfram gives a long but highly readable explanation, from first principles, of how modern AI models work. He follows the timeline from early neural nets to today’s LLMs and ChatGPT.\n\t-   **[Transformers, explained](https:\u002F\u002Fdaleonai.com\u002Ftransformers-explained)**: This post by Dale Markowitz is a shorter, more direct answer to the question “what is an LLM, and how does it work?” This is a great way to ease into the topic and develop intuition for the technology. It was written about GPT-3 but still applies to newer models.\n\t-   **[How Stable Diffusion works](https:\u002F\u002Fmccormickml.com\u002F2022\u002F12\u002F21\u002Fhow-stable-diffusion-works\u002F)**: This is the computer vision analogue to the last post. Chris McCormick gives a layperson’s explanation of how Stable Diffusion works and develops intuition around text-to-image models generally. 
For an even _gentler_ introduction, check out this [comic](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fzs5dk5\u002Fi_made_an_infographic_to_explain_how_stable\u002F) from r\u002FStableDiffusion.\n\t\t- (2025) 3blue1brown on [how diffusion works](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iv-5mZ_9CPY)\n\t- Explainers\n\t\t-   [**Deep learning in a nutshell: core concepts**](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fdeep-learning-nutshell-core-concepts\u002F): This four-part series from Nvidia walks through the basics of deep learning as practiced in 2015, and is a good resource for anyone just learning about AI.\n\t\t-   **[Practical deep learning for coders](https:\u002F\u002Fcourse.fast.ai\u002F)**: Comprehensive, free course on the fundamentals of AI, explained through practical examples and code.\n\t\t-   **[Word2vec explained](https:\u002F\u002Ftowardsdatascience.com\u002Fword2vec-explained-49c52b4ccb71)**: Easy introduction to embeddings and tokens, which are building blocks of LLMs (and all language models).\n\t\t\t- https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=44708028\n\t\t-   **[Yes you should understand backprop](https:\u002F\u002Fkarpathy.medium.com\u002Fyes-you-should-understand-backprop-e2f06eab496b)**: More in-depth post on back-propagation if you want to understand the details. 
If you want even more, try the [Stanford CS231n lecture](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=i94OvYb6noo) ([course here](http:\u002F\u002Fcs231n.stanford.edu\u002F2016\u002F)) on Youtube.\n\t- Courses\n\t\t-   **[Stanford CS229](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU)**: Introduction to Machine Learning with Andrew Ng, covering the fundamentals of machine learning.\n\t\t-   **[Stanford CS224N](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rOSH4v6133s9LFPRHjEmbmJ)**: NLP with Deep Learning with Chris Manning, covering NLP basics through the first generation of LLMs.\n  - https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course\n  - https:\u002F\u002Fcims.nyu.edu\u002F~sbowman\u002Feightthings.pdf\n\t  1. LLMs predictably get more capable with increasing investment, even without targeted innovation. \n\t  2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment. \n\t  3. LLMs often appear to learn and use representations of the outside world. \n\t  4. There are no reliable techniques for steering the behavior of LLMs. \n\t  5. Experts are not yet able to interpret the inner workings of LLMs. \n\t  6. Human performance on a task isn’t an upper bound on LLM performance. \n\t  7. LLMs need not express the values of their creators nor the values encoded in web text. \n\t  8. Brief interactions with LLMs are often misleading.\n\t  9. 
simonw highlights https:\u002F\u002Ffedi.simonwillison.net\u002F@simon\u002F110144185463887790\n  - 10 open challenges in LLM research https:\u002F\u002Fhuyenchip.com\u002F2023\u002F08\u002F16\u002Fllm-research-open-challenges.html\n  - openai prompt eng cookbook https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-cookbook\u002Fblob\u002Fmain\u002Ftechniques_to_improve_reliability.md\n  - on prompt eng overview https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-03-15-prompt-engineering\u002F\n  - https:\u002F\u002Fmoultano.wordpress.com\u002F2023\u002F06\u002F28\u002Fthe-many-ways-that-digital-minds-can-know\u002F comparing search vs ai\n  - Recap of 2022's major AI developments https:\u002F\u002Fwww.deeplearning.ai\u002Fthe-batch\u002Fissue-176\u002F\n  - DALLE2 asset generation + inpainting https:\u002F\u002Ftwitter.com\u002Faifunhouse\u002Fstatus\u002F1576202480936886273?s=20&t=5EXa1uYDPVa2SjZM-SxhCQ\n  - suhail journey https:\u002F\u002Ftwitter.com\u002FSuhail\u002Fstatus\u002F1541276314485018625?s=20&t=X2MVKQKhDR28iz3VZEEO8w\n  - composable diffusion - \"AND\" instead of \"and\" https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1580293860902985728\n  - on BPE tokenization https:\u002F\u002Ftowardsdatascience.com\u002Fbyte-pair-encoding-subword-based-tokenization-algorithm-77828a70bee0 see also google sentencepiece and openai tiktoken\n\t  - see visualization here https:\u002F\u002Flucalp.dev\u002Fbitter-lesson-tokenization-and-blt\u002F\n\t  - source in GPT2 source https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgpt-2\u002Fblob\u002Fmaster\u002Fsrc\u002Fencoder.py\n\t  - note that BPEs are suboptimal https:\u002F\u002Fwww.lesswrong.com\u002Fposts\u002FdFbfCLZA4pejckeKc\u002Fa-mechanistic-explanation-for-solidgoldmagikarp-like-tokens?commentId=9jNdKscwEWBB4GTCQ\n\t\t  - [\u002F\u002F---------------------------------------------------------------------------------------------------------------- is a single GPT-4 
token](https:\u002F\u002Ftwitter.com\u002Fgoodside\u002Fstatus\u002F1753192905844592989)\n\t\t  - [GPT-3.5 crashes when it thinks about useRalativeImagePath too much](https:\u002F\u002Fiter.ca\u002Fpost\u002Fgpt-crash\u002F)\n\t\t  - causes math and string character issues https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35363769\n\t\t  - and cause [issues with evals](https:\u002F\u002Fx.com\u002Fmain_horse\u002Fstatus\u002F1744560083957411845?s=20)\n\t\t  - [glitch tokens](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=39086318) happen when tokenizer has different dataset than LLM\n\t\t  - [karpathy talking about why tokenization is messy](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zduSFxRajkE)\n\t  - https:\u002F\u002Fplatform.openai.com\u002Ftokenizer and https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftiktoken (more up to date: https:\u002F\u002Ftiktokenizer.vercel.app\u002F)\n\t  - Wordpiece -> BPE -> SentenceTransformer\n\t\t  -  [Preliminary reading on Embeddings](https:\u002F\u002Ftowardsdatascience.com\u002Fneural-network-embeddings-explained-4d028e6f0526?gi=ee46baab0d8f)\n\t\t  - https:\u002F\u002Fyoutu.be\u002FQdDoFfkVkcw?si=qefZSDDSpxDNd313\n\t\t-   [Huggingface MTEB Benchmark of a bunch of Embeddings](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fmteb)\n\t\t-   [notable issues with GPT3 Embeddings](https:\u002F\u002Ftwitter.com\u002FNils_Reimers\u002Fstatus\u002F1487014195568775173) and alternatives to consider\n\t  - https:\u002F\u002Fobservablehq.com\u002F@simonw\u002Fgpt-3-token-encoder-decoder\n\t  - karpathy wants tokenization to go away https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1657949234535211009\n\t  - positional encoding not needed for decoder only https:\u002F\u002Ftwitter.com\u002Fa_kazemnejad\u002Fstatus\u002F1664277559968927744?s=20\n  - creates its own language https:\u002F\u002Ftwitter.com\u002Fgiannis_daras\u002Fstatus\u002F1531693104821985280\n  - Google Cloud Generative AI Learning Path 
  - Google Cloud Generative AI Learning Path https://www.cloudskillsboost.google/paths/118
  - img2img https://andys.page/posts/how-to-draw/
  - on language modeling https://lena-voita.github.io/nlp_course/language_modeling.html - an approachable but technical explanation of language generation, including sampling from distributions and some mechanistic interpretability (finding the neuron that tracks quote state)
  - quest for photorealism https://www.reddit.com/r/StableDiffusion/comments/x9zmjd/quest_for_ultimate_photorealism_part_2_colors/
    - https://medium.com/merzazine/prompt-design-for-dall-e-photorealism-emulating-reality-6f478df6f186
  - settings tweaking https://www.reddit.com/r/StableDiffusion/comments/x3k79h/the_feeling_of_discovery_sd_is_like_a_great_proc/
    - seed selection https://www.reddit.com/r/StableDiffusion/comments/x8szj9/tutorial_seed_selection_and_the_impact_on_your/
    - minor parameter difference study (steps, clamp_max, ETA, cutn_batches, etc) https://twitter.com/KyrickYoung/status/1500196286930292742
    - Generative AI: Autocomplete for everything https://noahpinion.substack.com/p/generative-ai-autocomplete-for-everything?sd=pf
    - [How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources](https://yaofu.notion.site/How-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1) - a good paper on the development history of the GPT family of models and how their capabilities developed
- https://barryz-architecture-of-agentic-llm.notion.site/Almost-Everything-I-know-about-LLMs-d117ca25d4624199be07e9b0ab356a77

### Advanced Reads

- https://github.com/Mooler0410/LLMsPracticalGuide
	- good curated list of all the important papers
- https://github.com/eleutherAI/cookbook#the-cookbook Eleuther AI's list of resources for training. compare to https://github.com/google-research/tuning_playbook
- anti hype LLM reading list https://gist.github.com/veekaybee/be375ab33085102f9027853128dc5f0e
- [6 papers from Jason Wei of OpenAI](https://twitter.com/_jasonwei/status/1729585618311950445) ([blog](https://www.jasonwei.net/blog/some-intuitions-about-large-language-models))
	- GPT-3 paper (https://arxiv.org/abs/2005.14165)
	- chain-of-thought prompting (https://arxiv.org/abs/2201.11903)
	- scaling laws (https://arxiv.org/abs/2001.08361)
	- emergent abilities (https://arxiv.org/abs/2206.07682)
	- language models can follow both flipped labels and semantically-unrelated labels (https://arxiv.org/abs/2303.03846)
- [LLM Paper Notes](https://github.com/eugeneyan/llm-paper-notes) - notes from the [Latent Space paper club](https://www.latent.space/about#%C2%A7components) by [Eugene Yan](https://eugeneyan.com/)
- [CMU LLM syllabus](https://llmsystem.github.io/llmsystem2024spring/docs/Syllabus)
- explaining [batching in inference](https://www.seangoedecke.com/inference-batching-and-deepseek/), relevant to DeepSeek R1
- Transformers from scratch https://e2eml.school/transformers.html
	- transformers vs LSTM https://medium.com/analytics-vidhya/why-are-lstms-struggling-to-matchup-with-transformers-a1cc5b2557e3
	- transformer code walkthru https://twitter.com/mark_riedl/status/1555188022534176768
	- transformer family https://lilianweng.github.io/posts/2023-01-27-the-transformer-family-v2/
		- carmack paper list https://news.ycombinator.com/item?id=34639634
		- Transformer models: an introduction and catalog https://arxiv.org/abs/2302.07730
		- Deepmind - formal algorithms for transformers https://arxiv.org/pdf/2207.09238.pdf
	- Jay Alammar explainers
		- https://jalammar.github.io/illustrated-transformer/
		- https://jalammar.github.io/visualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention/
- karpathy on transformers
	- **Convergence**: The ongoing consolidation in AI is incredible. When I started ~decade ago vision, speech, natural language, reinforcement learning, etc. were completely separate; You couldn't read papers across areas - the approaches were completely different, often not even ML based. In 2010s all of these areas started to transition 1) to machine learning and specifically 2) neural nets. The architectures were diverse but at least the papers started to read more similar, all of them utilizing large datasets and optimizing neural nets. But as of approx.
last two years, even the neural net architectures across all areas are starting to look identical - a Transformer (definable in ~200 lines of PyTorch [https://github.com/karpathy/minGPT/blob/master/mingpt/model.py…](https://t.co/xQL5NyJkLE)), with very minor differences. Either as a strong baseline or (often) state of the art. ([tweetstorm](https://twitter.com/karpathy/status/1468370605229547522?s=20))
	- **Why Transformers won**: The Transformer is a magnificent neural network architecture because it is a general-purpose differentiable computer. It is simultaneously: 1) expressive (in the forward pass) 2) optimizable (via backpropagation+gradient descent) 3) efficient (high parallelism compute graph) [tweetstorm](https://twitter.com/karpathy/status/1582807367988654081)
		- https://twitter.com/karpathy/status/1593417989830848512?s=20
		- elaborated in [1hr stanford lecture](https://www.youtube.com/watch?v=XfpMkf4rD6E) and [8min lex fridman summary](https://www.youtube.com/watch?v=9uw3F6rndnA)
	- [BabyGPT](https://twitter.com/karpathy/status/1645115622517542913) with two tokens 0/1 and context length of 3, viewing it as a finite state markov chain. It was trained on the sequence "111101111011110" for 50 iterations.
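The markov-chain framing can be sketched by counting how often each length-3 context is followed by 0 or 1 in that training string (a toy illustration of the state-transition view, not karpathy's actual notebook code):

```python
from collections import Counter, defaultdict

seq = "111101111011110"  # the toy training sequence from the tweet
ctx_len = 3              # BabyGPT's context length

# Count how often each 3-token context is followed by each next token.
counts = defaultdict(Counter)
for i in range(len(seq) - ctx_len):
    counts[seq[i:i + ctx_len]][seq[i + ctx_len]] += 1

# Normalize into the transition probabilities on the arrows of the
# finite state markov chain; a trained model approximates these.
probs = {ctx: {tok: n / sum(c.values()) for tok, n in c.items()}
         for ctx, c in counts.items()}
print(probs)
```

Only contexts that actually occur in the data get arrows (here "111" splits 50/50 between 0 and 1, while "110", "101", and "011" deterministically emit 1).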
The parameters and the architecture of the Transformer modify the probabilities on the arrows.
	- Build GPT from scratch https://www.youtube.com/watch?v=kCc8FmEb1nY
	- different GPT from scratch in 60 LOC https://jaykmody.com/blog/gpt-from-scratch/
- [Diffusion models from scratch, from a new theoretical perspective](https://www.chenyang.co/diffusion.html) - code-driven intro to diffusion models
- [137 emergent abilities of large language models](https://www.jasonwei.net/blog/emergence)
	- Emergent few-shot prompted tasks: BIG-Bench and MMLU benchmarks
	- Emergent prompting strategies
		- [Instruction-following](https://openreview.net/forum?id=gEZrGCozdqR)
		- [Scratchpad](https://openreview.net/forum?id=iedYJm92o0a)
		- [Using open-book knowledge for fact checking](https://arxiv.org/abs/2112.11446)
		- [Chain-of-thought prompting](https://arxiv.org/abs/2201.11903)
		- [Differentiable search index](https://arxiv.org/abs/2202.06991)
		- [Self-consistency](https://arxiv.org/abs/2203.11171)
		- [Leveraging explanations in prompting](https://arxiv.org/abs/2204.02329)
		- [Least-to-most prompting](https://arxiv.org/abs/2205.10625)
		- [Zero-shot chain-of-thought](https://arxiv.org/abs/2205.11916)
		- [Calibration via P(True)](https://arxiv.org/abs/2207.05221)
		- [Multilingual chain-of-thought](https://arxiv.org/abs/2210.03057)
		- [Ask-me-anything prompting](https://arxiv.org/abs/2210.02441)
	- some pushback - are they a mirage?
just don't use harsh metrics
		- https://www.jasonwei.net/blog/common-arguments-regarding-emergent-abilities
		- https://hai.stanford.edu/news/ais-ostensible-emergent-abilities-are-mirage
  - Images
	  - Eugene Yan explanation of the Text to Image stack https://eugeneyan.com/writing/text-to-image/
	  - VQGAN/CLIP https://minimaxir.com/2021/08/vqgan-clip/
	  - 10 years of Image generation history https://zentralwerkstatt.org/blog/ten-years-of-image-synthesis
	  - Vision Transformers (ViT) Explained https://www.pinecone.io/learn/vision-transformers/
  - negative prompting https://minimaxir.com/2022/11/stable-diffusion-negative-prompt/
  - best papers of 2022 https://www.yitay.net/blog/2022-best-nlp-papers
  - [Predictability and Surprise in Large Generative Models](https://arxiv.org/pdf/2202.07785.pdf) - good survey paper of what we know about scaling, capabilities, and the rise of LLMs so far
- more prompt eng papers https://github.com/dair-ai/Prompt-Engineering-Guide
- https://creator.nightcafe.studio/vqgan-clip-keyword-modifier-comparison VQGAN+CLIP Keyword Modifier Comparison
- History of Transformers
	- richard socher on their contribution to the attention mechanism leading up to transformers https://overcast.fm/+r1P4nKfFU/1:00:00
	- https://kipp.ly/blog/transformer-taxonomy/ This document is my running literature review for people trying to catch up on AI.
It covers 22 models, 11 architectural changes, 7 post-pre-training techniques and 3 training techniques (and 5 things that are none of the above)
	- [Understanding Large Language Models: A Cross-Section of the Most Relevant Literature To Get Up to Speed](https://magazine.sebastianraschka.com/p/understanding-large-language-models)
		- gives credit to Bahdanau et al (2014), which I believe first proposed the concept of applying a Softmax function over token scores to compute attention, setting the stage for the original transformer by Vaswani et al (2017). https://news.ycombinator.com/item?id=35589756
	- https://finbarrtimbers.substack.com/p/five-years-of-progress-in-gpts GPT1/2/3, Megatron, Gopher, Chinchilla, PaLM, LLaMa
	- good summary paper (8 things to know) https://cims.nyu.edu/~sbowman/eightthings.pdf
- [Huggingface MOE explainer](https://huggingface.co/blog/moe)
- https://blog.alexalemi.com/kl-is-all-you-need.html
- VQGAN+CLIP keyword modifier comparison: "We compared 126 keyword modifiers with the same prompt and initial image. These are the results."
  - https://creator.nightcafe.studio/collection/8dMYgKm1eVXG7z9pV23W
- Google released PartiPrompts as a benchmark: https://parti.research.google/ "PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work.
P2 can be used to measure model capabilities across various categories and challenge aspects."
- Video tutorials
  - Pixel art https://www.youtube.com/watch?v=UvJkQPtr-8s&feature=youtu.be
- History of papers
	- 2008: Unified Architecture for NLP (Collobert-Weston) https://twitter.com/ylecun/status/1611921657802768384
	- 2015: [Semi-supervised sequence learning](https://arxiv.org/abs/1511.01432) https://twitter.com/deliprao/status/1611896130589057025?s=20
	- 2017: Transformers (Vaswani et al)
	- 2018: GPT (Radford et al)
- Misc
  - StabilityAI CIO perspective https://danieljeffries.substack.com/p/the-turning-point-for-truly-open?sd=pf
  - https://github.com/awesome-stable-diffusion/awesome-stable-diffusion
  - https://github.com/microsoft/LMOps guide to msft prompt research
  - gwern's behind-the-scenes discussion of Bing, GPT4, and the Microsoft-OpenAI relationship https://www.lesswrong.com/posts/jtoPawEhLNXNxvgTT/bing-chat-is-blatantly-aggressively-misaligned

### other lists like this

- https://gist.github.com/rain-1/eebd5e5eb2784feecf450324e3341c8d
- https://github.com/underlines/awesome-marketing-datascience/blob/master/awesome-ai.md#llama-models
- https://github.com/imaurer/awesome-decentralized-llm

## Communities

- Discords (see https://buttondown.email/ainews for daily email recaps, updated live)
	- [Latent Space Discord](https://discord.gg/xJJMRaWCRt) (ours!)
	- General hacking and learning
		- [ChatGPT Hackers Discord](https://www.chatgpthackers.dev/)
		- [Alignment Lab AI Discord](https://discord.com/invite/k36qjUxyJC)
		- [Nous Research Discord](https://discord.gg/T3kTZfYzs6)
		- [DiscoLM Discord](https://discord.com/invite/vGRFMnS6c2)
		- [Karpathy Discord](https://discord.gg/3zy8kqD9Cp) (inactive)
		- [HuggingFace Discord](https://discuss.huggingface.co/t/join-the-hugging-face-discord/11263)
		- [Skunkworks AI Discord](https://discord.gg/3Sfmpd3Njt) (new)
		- [Jeff Wang/LLM Perf enthusiasts discord](https://twitter.com/wangzjeff)
		- [CUDA Mode (Mark Saroufim)](https://discord.com/invite/Wu4pdW8QqM) see [Youtube](https://www.youtube.com/@CUDAMODE) and [GitHub](https://github.com/cuda-mode)
	- Art
		- [StableDiffusion Discord](https://discord.com/invite/stablediffusion)
		- Deforum Discord https://discord.gg/upmXXsrwZc
		- Lexica Discord https://discord.com/invite/bMHBjJ9wRh
	- AI research
		- LAION discord https://discord.gg/xBPBXfcFHd
		- Eleuther discord: https://www.eleuther.ai/get-involved/ ([primer](https://blog.eleuther.ai/year-one/))
	- Various startups
		- Perplexity Discord https://discord.com/invite/kWJZsxPDuX
		- Midjourney's discord
			- how to use midjourney v4 https://twitter.com/fabianstelzer/status/1588856386540417024?s=20&t=PlgLuGAEEds9HwfegVRrpg
		- https://stablehorde.net/
	- Agents
		- AutoGPT discord
		- BabyAGI discord
- Reddit
	- https://reddit.com/r/stableDiffusion
	- https://www.reddit.com/r/LocalLLaMA/
	- https://www.reddit.com/r/bing
	- https://www.reddit.com/r/openai

## People

> *Unknown to many people, a growing amount of alpha is now outside of
Arxiv, sources include but are not limited to: https://github.com/trending, HN, that niche Discord server, anime profile picture anons on X, reddit* - [K](https://twitter.com/karpathy/status/1733968385472704548)

This list will be out of date but will get you started. My live list of people to follow is at: https://twitter.com/i/lists/1585430245762441216

- Researchers/Developers
  - https://twitter.com/_jasonwei
  - https://twitter.com/johnowhitaker/status/1565710033463156739
  - https://twitter.com/altryne/status/1564671546341425157
  - https://twitter.com/SchmidhuberAI
  - https://twitter.com/nearcyan
  - https://twitter.com/karinanguyen_
  - https://twitter.com/abhi_venigalla
  - https://twitter.com/advadnoun
  - https://twitter.com/polynoamial
  - https://twitter.com/vovahimself
  - https://twitter.com/sarahookr
  - https://twitter.com/shaneguML
  - https://twitter.com/MaartenSap
  - https://twitter.com/ethanCaballero
  - https://twitter.com/ShayneRedford
  - https://twitter.com/seb_ruder
  - https://twitter.com/rasbt
  - https://twitter.com/wightmanr
  - https://twitter.com/GaryMarcus
  - https://twitter.com/ylecun
  - https://twitter.com/karpathy
  - https://twitter.com/pirroh
  - https://twitter.com/eerac
  - https://twitter.com/teknium
  - https://twitter.com/alignment_lab
  - https://twitter.com/picocreator
  - https://twitter.com/charlespacker
  - https://twitter.com/ldjconfirmed
  - https://twitter.com/nisten
  - https://twitter.com/far__el
  - https://twitter.com/i/lists/1713824630241202630
- News/Aggregators
  - https://twitter.com/ai__pub
  - https://twitter.com/WeirdStableAI
  - https://twitter.com/multimodalart
  - https://twitter.com/LastWeekinAI
  - https://twitter.com/paperswithcode
  - https://twitter.com/DeepLearningAI_
  - https://twitter.com/dl_weekly
  - https://twitter.com/slashML
  - https://twitter.com/_akhaliq
  - https://twitter.com/aaditya_ai
  - https://twitter.com/bentossell
  - https://twitter.com/johnvmcdonnell
- Founders/Builders/VCs
  - https://twitter.com/levelsio
  - https://twitter.com/goodside
  - https://twitter.com/c_valenzuelab
  - https://twitter.com/Raza_Habib496
  - https://twitter.com/sharifshameem/status/1562455690714775552
  - https://twitter.com/genekogan/status/1555184488606564353
  - https://twitter.com/levelsio/status/1566069427501764613?s=20&t=camPsWtMHdSSEHqWd0K7Ig
  - https://twitter.com/amanrsanger
  - https://twitter.com/ctjlewis
  - https://twitter.com/sarahcat21
  - https://twitter.com/jackclarkSF
  - https://twitter.com/alexandr_wang
  - https://twitter.com/rameerez
  - https://twitter.com/scottastevenson
  - https://twitter.com/denisyarats
- Stability
  - https://twitter.com/StabilityAI
  - https://twitter.com/StableDiffusion
  - https://twitter.com/hardmaru
  - https://twitter.com/JJitsev
- OpenAI
  - https://twitter.com/sama
  - https://twitter.com/ilyasut
  - https://twitter.com/miramurati
- HuggingFace
  - https://twitter.com/younesbelkada
- Artists
  - https://twitter.com/karenxcheng/status/1564626773001719813
  - https://twitter.com/TomLikesRobots
- Other
  - Companies
    - https://twitter.com/AnthropicAI
    - https://twitter.com/AssemblyAI
    - https://twitter.com/CohereAI
    - https://twitter.com/MosaicML
    - https://twitter.com/MetaAI
    - https://twitter.com/DeepMind
    - https://twitter.com/HelloPaperspace
- Bots and Apps
  - https://twitter.com/dreamtweetapp
  - https://twitter.com/aiarteveryhour

## Quotes, Reality & Demotivation

- Narrow, tedious domain use cases https://twitter.com/WillManidis/status/1584900092615528448 and https://twitter.com/WillManidis/status/1584900100480192516
- antihype https://twitter.com/alexandr_wang/status/1573302977418387457
- antihype https://twitter.com/fchollet/status/1612142423425138688?s=46&t=pLCNW9pF-co4bn08QQVaUg
- prompt eng memes
	- https://twitter.com/_jasonwei/status/1516844920367054848
- things stablediffusion struggles with https://opguides.info/posts/aiartpanic/
- New Google
  - https://twitter.com/alexandr_wang/status/1585022891594510336
- New Powerpoint
  - via emad
- Appending prompts by default in UI
  - DALLE: https://twitter.com/levelsio/status/1588588688115912705?s=20&t=0ojpGmH9k6MiEDyVG2I6gg
- There have been two previous winters, one 1974-1980 and one 1987-1993.
https://www.erichgrunewald.com/posts/the-prospect-of-an-ai-winter/. A bit more commentary [here](https://news.ycombinator.com/item?id=37474528). related - [AI Effect](https://www.sequoiacap.com/article/ai-paradox-perspective/) - "once it works it's not AI"
- It's just matrix multiplication/stochastic parrots
	- Even LLM skeptic Yann LeCun says LLMs have some level of understanding: https://twitter.com/ylecun/status/1667947166764023808
	- Gary Marcus’ “Deep Learning is Hitting a Wall” https://nautil.us/deep-learning-is-hitting-a-wall-238440/ pushed symbolic systems
- "guo lai ren" antihypers -> worriers
	- https://adamkarvonen.github.io/machine_learning/2024/03/20/chess-gpt-interventions.html#next-token-predictors

## Legal, Ethics, and Privacy

- NSFW filter https://vickiboykis.com/2022/11/18/some-notes-on-the-stable-diffusion-safety-filter/
- On "AI Art Panic" https://opguides.info/posts/aiartpanic/
	- [I lost everything that made me love my job through Midjourney](https://old.reddit.com/r/blender/comments/121lhfq/i_lost_everything_that_made_me_love_my_job/)
	- [Midjourney artist list](https://www.theartnewspaper.com/2024/01/04/leaked-names-of-16000-artists-used-to-train-midjourney-ai#)
- Yannick influencing OPENRAIL-M https://www.youtube.com/watch?v=W5M-dvzpzSQ
- art schools accepting AI art https://twitter.com/DaveRogenmoser/status/1597746558145265664
- DRM issues https://undeleted.ronsor.com/voice.ai-gpl-violations-with-a-side-of-drm/
- stealing art [https://stablediffusionlitigation.com](https://stablediffusionlitigation.com/)
	- http://www.stablediffusionfrivolous.com/
	- stable attribution https://news.ycombinator.com/item?id=34670136
	- counter argument for disney https://twitter.com/jonst0kes/status/1616219435492163584?s=46&t=HqQqDH1yEwhWUSQxYTmF8w
	- research on stable diffusion copying https://twitter.com/officialzhvng/status/1620535905298817024?s=20&t=NC-nW7pfDa8nyRD08Lx1Nw This paper used Stable Diffusion to generate 175 million images over 350,000 prompts and only found 109 near copies of training data. Am I right that my main takeaway from this is how good Stable Diffusion is at *not* memorizing training examples?
- scraping content
	- https://blog.ericgoldman.org/archives/2023/08/web-scraping-for-me-but-not-for-thee-guest-blog-post.htm
	- sarah silverman case - openai response https://arstechnica.com/tech-policy/2023/08/openai-disputes-authors-claims-that-every-chatgpt-response-is-a-derivative-work/
- Licensing
	- [AI weights are not open "source" - Sid Sijbrandij](https://opencoreventures.com/blog/2023-06-27-ai-weights-are-not-open-source/)
- Diversity and Equity
	- sexualizing minorities https://twitter.com/lanadenina/status/1680238883206832129 the reason is [porn is good at bodies](https://twitter.com/levelsio/status/1680665706235404288)
	- [OpenAI tacking on "black" randomly to make DallE diverse](https://twitter.com/rzhang88/status/1549472829304741888?s=20)
- Privacy - confidential computing https://www.edgeless.systems/blog/how-confidential-computing-and-ai-fit-together/
- AI taking jobs https://donaldclarkplanb.blogspot.com/2024/02/this-is-why-idea-that-ai-will-just.html

## Alignment, Safety

- Anthropic - https://arxiv.org/pdf/2112.00861.pdf
	- Helpful: attempt to do what is asked. concise, efficient. ask followups. redirect bad questions.
	- Honest: give accurate information, express uncertainty. don't imitate responses expected from an expert if it doesn't have the capabilities/knowledge
	- Harmless: not offensive/discriminatory. refuse to assist dangerous acts. recognize when providing sensitive/consequential advice
	- criticism and boundaries as future direction https://twitter.com/davidad/status/1628489924235206657?s=46&t=TPVwcoqO8qkc7MuaWiNcnw
- Just Eliezer entire body of work
	- https://twitter.com/esyudkowsky/status/1625922986590212096
	- agi list of lethalities https://www.lesswrong.com/posts/uMQ3cqWDPHhjtiesc/agi-ruin-a-list-of-lethalities
	- note that eliezer has made controversial comments [in the past](https://twitter.com/johnnysands42/status/1641349759754485760?s=46&t=90xQ8sGy63D2OtiaoGJuww) and also in [recent times](https://twitter.com/lorakolodny/status/1641448759086415875?s=46&t=90xQ8sGy63D2OtiaoGJuww) ([TIME article](https://time.com/6266923/ai-eliezer-yudkowsky-open-letter-not-enough/))
- Connor Leahy may be a more sane/measured/technically competent version of yud https://overcast.fm/+aYlOEqTJ0
	- it's not just paperclip factories
	- https://www.lesswrong.com/posts/HBxe6wdjxK239zajf/what-failure-looks-like
- the 6 month pause letter
	- https://futureoflife.org/open-letter/pause-giant-ai-experiments/
	- yann lecun vs andrew ng https://www.youtube.com/watch?v=BY9KV8uCtj4
	- https://scottaaronson.blog/?p=7174
	- [emily bender response](https://twitter.com/emilymbender/status/1640920936600997889)
	- [Geoffrey Hinton leaving Google](https://news.ycombinator.com/item?id=35771104)
	- followed up by one-sentence public letter https://www.nytimes.com/2023/05/30/technology/ai-threat-warning.html
- xrisk
	- Is avoiding extinction from AI really an urgent priority? ([link](https://link.mail.beehiiv.com/ss/c/5J8WPrGlKFK1BUsRYoWIfdCHPD-3Xbi8FugDN8_LxoMLoHhMJlEG7wG6Qm_xTk5kjhv7y5vwidMdRiSXu8XoBiq8nEOR34GaAFwHPM3qm-KgbLw6_hl3AQd9rRxt7mbTHvXRNeF6hfODzGg5z4t8D3ZdIldVTpoAGQ-KmKNEnmzBudTJIJtP1kjZLr1QqJYX/3wo/z-oFlqV_RUGtJd6OO2FogA/h13/XrV7_YgyheO615JC1X8VasmPENc7KRnJrp03iAlmoXw))
	- AI Is not an arms race. ([link](https://link.mail.beehiiv.com/ss/c/znicDlvJFyGBhcMAVWxZFpwlt5VC0YnUsV4gzm_4ut3qiUuoiY9_n0aSS6Uv0inD2_kx5JhKOVXSRbXMrV7VwL_fuIMlfwAiTSTTCxo56Xv58IWHdUClCfyt4alUnKRf2MV5a7rIM0KG4vwVLObEua0i3t5UIvPlbHybyFluj52xGYswNiQUMZl2OrDzh1u4oLAvnCVkTUi5vCX0i6-N8A/3wo/z-oFlqV_RUGtJd6OO2FogA/h14/K2LmS7FyAGW-u4j6oHnp_bKapwqFG_Gb4MC5XPpKJsM))
	- If we’re going to label AI an ‘extinction risk,’ we need to clarify how it could happen. ([link](https://link.mail.beehiiv.com/ss/c/znicDlvJFyGBhcMAVWxZFsLJphRoW5fZiwv4ALj3pNMBRHKVGkJIME1sXnwK-P46O3jH_jtoC_wqyCeroi2bRUKEUKd_QQvXSoMgu3Nqbw99wsPjSDl_Lt6RSk7bni0KT4c1-gstNpWdPoUbj3air5NbOAbvtp5P9ds1xCm4qG-6dvoJELH0HHB7G9FO2ZFlXPTm37nswLD77q6opSiWnrTEHhHsCo37yO01bFol4LeaSr8F4e_WynvF0QrKLNaSKf0rDpyMSn__lxmbRl6M1A/3wo/z-oFlqV_RUGtJd6OO2FogA/h15/SYpE89X1W3Z_qSjH8YJmhLYYRRgjUHJzn2WILhBIcxw))
- OpenAI superalignment https://www.youtube.com/watch?v=ZP_N4q5U3eE

### regulation

- chinese regulation https://www.chinalawtranslate.com/en/overview-of-draft-measures-on-generative-ai/
	- https://twitter.com/mmitchell_ai/status/1647697067006111745?s=46&t=90xQ8sGy63D2OtiaoGJuww
	- China is the only major world power that explicitly [regulates](https://info.deeplearning.ai/e3t/Ctc/LX+113/cJhC404/VVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0th3q3n_V1-WJV7CgH8lW4wLFDD1Q5sD1W6QG0gj2gQKZ5W2WNS9Z5gKTB8W6jF2Dc8ltmWfW1kwRcc4LNmnNW2_F-zw6rWXtDN8M32V9_0Z1cN1gwSlkLF9WBW6yYMS68JLJYjN1wstfhr0tvgW5DCclJ4zMFhNN6tQ4vt1P5bVW5w-L-275lv9LW5zhjMk7CCjjcW20ChgZ57-8l2W50dQgR1_tfL-VqXDdY2t227nVzlNDX4m43yWW4D6GXl6Mf9JvW3qShZ085BMXqW5S2j7D4VWf5lW4c37Wn4lbf-NW4W6Hxl3CCDHRW451x4X8wNPKHW5zc90X90FjXcW97Qn_B7RdzpP3nQX1) generative AI
- italy banning chatgpt
- At its annual meeting in Japan, the Group of Seven (G7), an informal bloc of industrialized democratic governments, [announced](https://info.deeplearning.ai/e3t/Ctc/LX+113/cJhC404/VVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s_3q3nJV1-WJV7CgFv5W51g32V2hBgR-N3j2W3szNMJlW80w4Xv5Gg2S8N4_ZHQFYd4cRW8yvm4F2zg5qpW5xfrS61fJ8H4W49Nj5Y2zWcRbW97ym606Vq3X6W2-51W529GnLcW2zlMRl3qKmBCW8jd69B7nRzmFV5K0lP4FzrchW6nxHbj1vFJPqN3sbnlvFM2WhW6PNj-t5YfVS3W6pl7681yBKGxN1R1Mbj8wWj4W22BS_g1BH_1yW7pT8c47QKBQFW64WfHc80PxjRV6dQN42mCqRMW3yJrxC3DX4_5W5yqFbL34kwc0W770qZv2fjyv03bJQ1) the Hiroshima
Process, an intergovernmental task force empowered to investigate risks of generative AI. G7 members, which include Canada, France, Germany, Italy, Japan, the United Kingdom, and the United States, vowed to craft mutually compatible laws and regulate AI according to democratic values. These include fairness, accountability, transparency, safety, data privacy, protection from abuse, and respect for human rights.
- U.S. President Joe Biden [issued](https://info.deeplearning.ai/e3t/Ctc/LX+113/cJhC404/VVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s55nCT_V3Zsc37CgQX9W7wTfL38m-2KKW3mGNtx8sgMgJW10rjg65dMw5qN3jtZLMqRgQbV_3DXH2yr2HbW4vs2Tm43thGvW6fK8f72N6w37N53TdBst-8D1W6yzHrb70MHkTW1ckbRd5NfDP9W2j6yWK34KFvtW18lscs3lQ0G6W4GFgyx486-vdW5NJBQv4tvxYpW36FqGc4md2XfW2Fgj6n2fd-BSW3PyPVH9bD8W3N61PDTSyzVy1W2QSSm07tHjwWW8zG-Kl3TPwmfVMNjLb7Nnhk4W2B_zlf7n91mNW806djL3zxyMFW5RpR1Q9kcL0yW7ss_7m92D7Z-W4fWJYk3xBb3yN5bZbNkSvb14N2kgsftyLf7cN1WmZDl5Sw63W4FcWFn65g7DsVzPJZP2qtH36W3vfw782XRtSbW834rhB5jGZ7RW6K9z1d87ns4N38SY1) a strategic plan for AI. The initiative calls on U.S.
regulatory agencies to develop public datasets, benchmarks, and standards for training, measuring, and evaluating AI systems.
- Earlier this month, France’s data privacy regulator [announced](https://info.deeplearning.ai/e3t/Ctc/LX+113/cJhC404/VVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s_3q3nJV1-WJV7CgTpxW8C6yq247bfj8W4mQv0-4hl35_W8SPtZ52JXPlxW1Fkb5p54f30RW6sj0m71XsJ4yF7-b6kBx5vTW7cwGKJ6RcqpFW5325sQ2R54VbW79rbsP4wh6MyW2MwyS_6CSJfwW8VBz1y1M5_4nW2nhxPD5vZw17MCVDrTvH8ljW1JYH0t8DPm23W3BPQvW69f5TFW5ms3_413vDbJVw9GyW1yMYBfW6zpGVw12swbdV_wmsh11rtb0Vlzk0b6ZkhpZW1XWkdG7yNYpsW38p95C5jXCx7W4qrc4w1_q_sdW5RD3Jv7bdxpv2Gp1) a framework for regulating generative AI.
- regulation vs Xrisk https://1a3orn.com/sub/essays-regulation-stories.html
- [Multimodal Prompt Injection in GPT4V](https://news.ycombinator.com/item?id=37877605)

## Misc

- Whisper
  - https://huggingface.co/spaces/sensahin/YouWhisper YouWhisper converts Youtube videos to text using openai/whisper.
  - https://twitter.com/jeffistyping/status/1573145140205846528 youtube whisperer
  - multilingual subtitles https://twitter.com/1littlecoder/status/1573030143848722433
  - video subtitles https://twitter.com/m1guelpf/status/1574929980207034375
  - you can join whisper to stable diffusion for reasons https://twitter.com/fffiloni/status/1573733520765247488/photo/1
  - known problems https://twitter.com/lunixbochs/status/1574848899897884672 (edge case with catastrophic failures)
- textually guided audio https://twitter.com/FelixKreuk/status/1575846953333579776
- Codegen
  - CodegeeX https://twitter.com/thukeg/status/1572218413694726144
  - https://github.com/salesforce/CodeGen
https:\u002F\u002Fjoel.tools\u002Fcodegen\u002F\n- pdf to structured data - Impira used to do it (dead link: https:\u002F\u002Fwww.impira.com\u002Fblog\u002Fhey-machine-whats-my-invoice-total) but if you look hard enough on twitter there are some alternatives\n- text to Human Motion diffusion https:\u002F\u002Ftwitter.com\u002FGuyTvt\u002Fstatus\u002F1577947409551851520\n  - abs: https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14916 \n  - project page: https:\u002F\u002Fguytevet.github.io\u002Fmdm-page\u002F\n","# AI Notes\n\nNotes on the latest developments in AI, with a focus on generative models and large language models. These serve as the “raw material” for the https:\u002F\u002Flspace.swyx.io\u002F newsletter.\n\n> This repo used to be named https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fprompt-eng, but was renamed because [prompt engineering is overhyped](https:\u002F\u002Ftwitter.com\u002Fswyx\u002Fstatus\u002F1596184757682941953). It is now a repo of notes on [AI Engineering](https:\u002F\u002Fwww.latent.space\u002Fp\u002Fai-engineer).\n\nThis README is only a high-level overview of the space; see the other markdown files in this repo for more updates:\n\n- `TEXT.md` - text generation, mostly with GPT-4\n\t- `TEXT_CHAT.md` - information about ChatGPT, its competitors, and derivative products\n\t- `TEXT_SEARCH.md` - information about GPT-4-powered semantic search and related topics\n\t- `TEXT_PROMPTS.md` - a small [prompt library](https:\u002F\u002Fwww.swyx.io\u002Fswipe-files-strategy) of good GPT-3 prompts\n- `INFRA.md` - raw notes on AI infrastructure, hardware, and scaling\n- `AUDIO.md` - tracking progress in audio\u002Fmusic\u002Fvoice transcription and generation\n- `CODE.md` - code-generation models, e.g. Copilot\n- `IMAGE_GEN.md` - the most fleshed-out file, focusing on Stable Diffusion, with brief mentions of MidJourney and DALL·E.\n\t- `IMAGE_PROMPTS.md` - a small [prompt library](https:\u002F\u002Fwww.swyx.io\u002Fswipe-files-strategy) of good image prompts\n- **Resources**: curated evergreen resources, suitable for permalinking\n- **stub notes** - very lightweight first drafts of pages on topics we may cover in future\n\t- `AGENTS.md` - tracking “agentic AI”\n- **blog ideas** - potential blog post ideas distilled from these notes, because…\n\n\u003C!-- START doctoc generated TOC please keep comment here to allow auto update -->\n\u003C!-- DON'T EDIT THIS SECTION, INSTEAD RE-RUN doctoc TO UPDATE -->\n\u003Cdetails>\n\u003Csummary>Table of Contents\u003C\u002Fsummary>\n\n- [Motivational Use Cases](#motivational-use-cases)\n- [Top AI Reads](#top-ai-reads)\n- [Communities](#communities)\n- [People](#people)\n- [Misc](#misc)\n- [Quotes, Reality & Demotivation](#quotes-reality--demotivation)\n- 
[Legal, Ethics, and Privacy](#legal-ethics-and-privacy)\n\n\u003C\u002Fdetails>\n\u003C!-- END doctoc generated TOC please keep comment here to allow auto update -->\n\n## Motivational Use Cases\n\n- images\n  - https:\u002F\u002Fmpost.io\u002Fbest-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts\n  - [3D MRI synthetic brain images](https:\u002F\u002Ftwitter.com\u002FWarvito\u002Fstatus\u002F1570691960792580096?) - [positive reception from a neuroimaging statistician](https:\u002F\u002Ftwitter.com\u002FdanCMDstat\u002Fstatus\u002F1572312699853312000?s=20&t=x-ouUbWA5n0-PxTGZcy2iA)\n  - [multiplayer Stable Diffusion](https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fhuggingface-projects\u002Fstable-diffusion-multiplayer?roomid=room-0)\n- video\n  - img2img of famous movie scenes ([La La Land](https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1565678995986911236))\n    - [img2img actor transformation](https:\u002F\u002Ftwitter.com\u002FLighthiserScott\u002Fstatus\u002F1567355079228887041?s=20&t=cBH4EGPC4r0Earm-mDbOKA) with ebsynth + koe_recast\n    - how ebsynth works https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1612047103806545923?s=20\n  - virtual fashion ([karenxcheng](https:\u002F\u002Ftwitter.com\u002Fkarenxcheng\u002Fstatus\u002F1564626773001719813))\n  - [seamless tiling images](https:\u002F\u002Ftwitter.com\u002Freplicatehq\u002Fstatus\u002F1568288903177859072?s=20&t=sRd3HRehPMcj1QfcOwDMKg)\n  - evolving scenes ([xander](https:\u002F\u002Ftwitter.com\u002Fxsteenbrugge\u002Fstatus\u002F1558508866463219712))\n  - outpainting https:\u002F\u002Ftwitter.com\u002Forbamsterdam\u002Fstatus\u002F1568200010747068417?s=21&t=rliacnWOIjJMiS37s8qCCw\n  - webUI img2img collaboration https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1563582621757898752\n  - image-to-video with rotation https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1571096804539912192\n  - “prompt paint” https:\u002F\u002Ftwitter.com\u002F1littlecoder\u002Fstatus\u002F1572573152974372864\n  - audio2video animation of your face https:\u002F\u002Ftwitter.com\u002Fsiavashg\u002Fstatus\u002F1597588865665363969\n  - physical toys to 3D models + animation 
https:\u002F\u002Ftwitter.com\u002Fsergeyglkn\u002Fstatus\u002F1587430510988611584\n  - music videos\n    - [Video Killed the Radio Star](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=WJaxFbdjm8c), [Colab](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fdmarx\u002Fvideo-killed-the-radio-star\u002Fblob\u002Fmain\u002FVideo_Killed_The_Radio_Star_Defusion.ipynb). This uses OpenAI's Whisper speech-to-text, letting you take a YouTube video and generate a Stable Diffusion animation driven by the lyrics in the video.\n    - [Stable Diffusion videos](https:\u002F\u002Fcolab.research.google.com\u002Fgithub\u002Fnateraw\u002Fstable-diffusion-videos\u002Fblob\u002Fmain\u002Fstable_diffusion_videos.ipynb) that generate videos by interpolating between prompts and audio\n  - direct text2video projects\n    - https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1575546841533497344\n    - https:\u002F\u002Fmakeavideo.studio\u002F - explorer https:\u002F\u002Fwebvid.datasette.io\u002Fwebvid\u002Fvideos\n    - https:\u002F\u002Fphenaki.video\u002F\n    - https:\u002F\u002Fgithub.com\u002FTHUDM\u002FCogVideo\n    - https:\u002F\u002Fimagen.research.google\u002Fvideo\u002F\n- text-to-3D https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1575541930905243652\n  - https:\u002F\u002Fdreamfusion3d.github.io\u002F\n  - open source implementation: https:\u002F\u002Fgithub.com\u002Fashawkey\u002Fstable-dreamfusion\n    - demo https:\u002F\u002Ftwitter.com\u002F_akhaliq\u002Fstatus\u002F1578035919403503616\n- text products\n  - a list of use cases at the end of https:\u002F\u002Fhuyenchip.com\u002F2023\u002F04\u002F11\u002Fllm-engineering.html\n  - Jasper\n  - GPT for Obsidian https:\u002F\u002Freasonabledeviations.com\u002F2023\u002F02\u002F05\u002Fgpt-for-second-brain\u002F\n  - gpt3 email https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fgpt3-email and [email clustering](https:\u002F\u002Fgithub.com\u002Fdanielgross\u002Fembedland\u002Fblob\u002Fmain\u002Fbench.py#L281)\n  - gpt3() in Google Sheets 
[2020](https:\u002F\u002Ftwitter.com\u002Fpavtalk\u002Fstatus\u002F1285410751092416513?s=20&t=ppZhNO_OuQmXkjHQ7dl4wg), [2022](https:\u002F\u002Ftwitter.com\u002Fshubroski\u002Fstatus\u002F1587136794797244417) - [sheet](https:\u002F\u002Fdocs.google.com\u002Fspreadsheets\u002Fd\u002F1YzeQLG_JVqHKz5z4QE9wUsYbLoVZZxbGDnj7wCf_0QQ\u002Fedit) Google Sheets https:\u002F\u002Ftwitter.com\u002Fmehran__jalali\u002Fstatus\u002F1608159307513618433\n    - https:\u002F\u002Fgpt3demo.com\u002Fapps\u002Fgoogle-sheets\n    - Charm https:\u002F\u002Ftwitter.com\u002Fshubroski\u002Fstatus\u002F1620139262925754368?s=20\n  - https:\u002F\u002Fwww.summari.com\u002F Summari helps busy people read more\n- market maps\u002Flandscapes\n  - Elad Gil 2024 [stack diagram](https:\u002F\u002Fblog.eladgil.com\u002Fp\u002Fthings-i-dont-know-about-ai)\n  - Sequoia market maps [Jan 2023](https:\u002F\u002Ftwitter.com\u002Fsonyatweetybird\u002Fstatus\u002F1584580362339962880), [July 2023](https:\u002F\u002Fwww.sequoiacap.com\u002Farticle\u002Fllm-stack-perspective\u002F), [Sept 2023](https:\u002F\u002Fwww.sequoiacap.com\u002Farticle\u002Fgenerative-ai-act-two\u002F)\n  - Base10 market map https:\u002F\u002Ftwitter.com\u002Fletsenhance_io\u002Fstatus\u002F1594826383305449491\n  - Matt Shumer market map https:\u002F\u002Ftwitter.com\u002Fmattshumer_\u002Fstatus\u002F1620465468229451776 https:\u002F\u002Fdocs.google.com\u002Fdocument\u002Fd\u002F1sewTBzRF087F6hFXiyeOIsGC1N4N3O7rYzijVexCgoQ\u002Fedit\n  - NFX https:\u002F\u002Fwww.nfx.com\u002Fpost\u002Fgenerative-ai-tech-5-layers?ref=context-by-cohere\n  - a16z https:\u002F\u002Fa16z.com\u002F2023\u002F01\u002F19\u002Fwho-owns-the-generative-ai-platform\u002F\n    - https:\u002F\u002Fa16z.com\u002F2023\u002F06\u002F20\u002Femerging-architectures-for-llm-applications\u002F\n    - https:\u002F\u002Fa16z.com\u002F100-gen-ai-apps\n  - Madrona https:\u002F\u002Fwww.madrona.com\u002Ffoundation-models\u002F\n  - Coatue\n    - https:\u002F\u002Fwww.coatue.com\u002Fblog\u002Fperspective\u002Fai-the-coming-revolution-2023\n  
  - https:\u002F\u002Fx.com\u002FSam_Awrabi\u002Fstatus\u002F1742324900034150646?s=20\n- game assets -\n  - Emad thread https:\u002F\u002Ftwitter.com\u002FEMostaque\u002Fstatus\u002F1591436813750906882\n  - scenario.gg https:\u002F\u002Ftwitter.com\u002Femmanuel_2m\u002Fstatus\u002F1593356241283125251\n  - [3D game character modeling example](https:\u002F\u002Fwww.traffickinggame.com\u002Fai-assisted-graphics\u002F)\n  - MarioGPT https:\u002F\u002Farxiv.org\u002Fpdf\u002F2302.05981.pdf https:\u002F\u002Fwww.slashgear.com\u002F1199870\u002Fmariogpt-uses-ai-to-generate-endless-super-mario-levels-for-free\u002F https:\u002F\u002Fgithub.com\u002Fshyamsn97\u002Fmario-gpt\u002Fblob\u002Fmain\u002Fmario_gpt\u002Flevel.py\n  - https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=36295227\n\n## Top AI Reads\n\nMore advanced GPT-3 reads have been moved to https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fai-notes\u002Fblob\u002Fmain\u002FTEXT.md\n\n- https:\u002F\u002Fwww.gwern.net\u002FGPT-3#prompts-as-programming\n- https:\u002F\u002Flearnprompting.org\u002F\n\n### Beginner Reads\n\n  - [Karpathy's 2025 intro to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=7xTGNNLPyMI) ([summary](https:\u002F\u002Fanfalmushtaq.com\u002Farticles\u002Fdeep-dive-into-llms-like-chatgpt-tldr))\n  - [Bill Gates on AI](https:\u002F\u002Fwww.gatesnotes.com\u002FThe-Age-of-AI-Has-Begun) ([tweet](https:\u002F\u002Ftwitter.com\u002Fgdb\u002Fstatus\u002F1638310597325365251?s=20))\n\t  - “The development of AI is as fundamental as the creation of the microprocessor, the personal computer, the Internet, and the mobile phone. It will change the way people work, learn, travel, get health care, and communicate with each other.”\n  - [Steve Yegge on how developers can use AI](https:\u002F\u002Fabout.sourcegraph.com\u002Fblog\u002Fcheating-is-all-you-need)\n  - [Karpathy's 2023 intro to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zjkBMFhNj_g) (notes from [Sarah Chieng](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1729569057475879103))\n  - [OpenAI's prompt engineering guide released at NeurIPS](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1741926266087870784), shared by Sarah Chieng\n  - 
[Why this AI moment may be the real deal](https:\u002F\u002Fwww.thenewatlantis.com\u002Fpublications\u002Fwhy-this-ai-moment-may-be-the-real-deal)\n  - Sam Altman - [Moore's Law for Everything](https:\u002F\u002Fmoores.samaltman.com\u002F)\n  - good MSR intro to foundation models: https:\u002F\u002Fyoutu.be\u002FHQI6O5DlyFc\n  - OpenAI prompt tutorial: https:\u002F\u002Fbeta.openai.com\u002Fdocs\u002Fquickstart\u002Fadd-some-examples\n  - Google LAMDA intro: https:\u002F\u002Faitestkitchen.withgoogle.com\u002Fhow-lamda-works\n  - Karpathy's gradient descent course\n  - FT visual storytelling on “[how transformers work](https:\u002F\u002Fig.ft.com\u002Fgenerative-ai\u002F)”\n  - DALL-E 2 prompt book: http:\u002F\u002Fdallery.gallery\u002Fwp-content\u002Fuploads\u002F2022\u002F07\u002FThe-DALL%C2%B7E-2-prompt-book-v1.02.pdf\n  - https:\u002F\u002Fmedium.com\u002Fnerd-for-tech\u002Fprompt-engineering-the-career-of-future-2fb93f90f117\n  - [How to use AI to do stuff](https:\u002F\u002Fwww.oneusefulthing.org\u002Fp\u002Fhow-to-use-ai-to-do-stuff-an-opinionated), covering getting information, working with data, and making images\n  - https:\u002F\u002Fourworldindata.org\u002Fbrief-history-of-ai, an overview of AI history with nice charts\n  - Jon Stokes' [AI Content Generation, Part 1: Machine Learning Basics](https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fai-content-generation-part-1-machine)\n  - [Andrew Ng - Opportunities in AI](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=5p248yoa3oE)\n  - [What are transformer models and how do they work?](https:\u002F\u002Ftxt.cohere.ai\u002Fwhat-are-transformer-models\u002F) - maybe [a bit too high level](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35577138)\n  - text generation\n\t  - Humanloop's [Prompt Engineering 101](https:\u002F\u002Fwebsite-olo3k29b2-humanloopml.vercel.app\u002Fblog\u002Fprompt-engineering-101)\n\t  - Stephen Wolfram's explainer: https:\u002F\u002Fwritings.stephenwolfram.com\u002F2023\u002F02\u002Fwhat-is-chatgpt-doing-and-why-does-it-work\u002F\n\t  - the same from Jon Stokes: jonstokes.com\u002Fp\u002Fthe-chat-stack-gpt-4-and-the-near\n\t  - https:\u002F\u002Fandymatuschak.org\u002Fprompts\u002F\n\t  - Cohere's LLM University: https:\u002F\u002Fdocs.cohere.com\u002Fdocs\u002Fllmu \n\t\t  - comprehensive guide from Jay Alammar: https:\u002F\u002Fllm.university\u002F\n\t  - 
https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fchatgpt-explained-a-guide-for-normies, ChatGPT explained for normies\n  - image generation\n\t  - https:\u002F\u002Fwiki.installgentoo.com\u002Fwiki\u002FStable_Diffusion overview\n\t  - https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fx41n87\u002Fhow_to_get_images_that_dont_suck_a\u002F\n\t  - https:\u002F\u002Fmpost.io\u002Fbest-100-stable-diffusion-prompts-the-most-beautiful-ai-text-to-image-prompts\u002F\n\t  - https:\u002F\u002Fwww.kdnuggets.com\u002F2021\u002F03\u002Fbeginners-guide-clip-model.html \n\t  - https:\u002F\u002Fwww.seangoedecke.com\u002Fdiffusion-models-explained\u002F\n  - nontechnical\n    - https:\u002F\u002Fwww.jonstokes.com\u002Fp\u002Fai-content-generation-part-1-machine\n    - https:\u002F\u002Fwww.protocol.com\u002Fgenerative-ai-startup-landscape-map\n    - https:\u002F\u002Ftwitter.com\u002Fsaranormous\u002Fstatus\u002F1572791179636518913\n\n### Intermediate Reads\n\n- **State of AI Reports**: [2018](https:\u002F\u002Fwww.stateof.ai\u002F2018), [2019](https:\u002F\u002Fwww.stateof.ai\u002F2019), [2020](https:\u002F\u002Fwww.stateof.ai\u002F2020), [2021](https:\u002F\u002Fwww.stateof.ai\u002F2021), [2022](https:\u002F\u002Fwww.stateof.ai\u002F)\n  - major events in reverse chronological order: https:\u002F\u002Fbleedingedge.ai\u002F\n  - [What We Know About LLMs](https:\u002F\u002Fwillthompson.name\u002Fwhat-we-know-about-llms-primer#block-920907dc37394adcac5bf4e7318adc10) - an excellent research recap\n  - [Karpathy's 1-hour guide to LLMs](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zjkBMFhNj_g) - summary from [Sarah Chieng](https:\u002F\u002Ftwitter.com\u002FSarahChieng\u002Fstatus\u002F1729569057475879103)\n\t  - 1. What is a large language model (LLM)?\n\t\t  - the two main components of an LLM\n\t\t    -   what do LLMs do?\n\t1. How do you create an LLM?\n\t    -   Phase 1: model pre-training\n\t    -   Phase 2: model fine-tuning\n\t        -   Step 2b: [optional] additional fine-tuning\n\t    -   Phase 3: model inference\n\t    -   Phase 4: [optional] augmenting the LLM with customization\n\t1. current LLM “leaderboards”\n\t2. the future of LLMs: what comes next?\n\t    -   how do we improve LLM performance?\n\t        -   LLM scaling laws\n\t        -   self-improvement\n\t    -   how do we improve LLM capabilities?\n\t        -   multimodality\n\t        -   System 1 + 2 thinking\n\t1. 
the “dark arts” of LLMs\n\t    -   jailbreaking\n\t    -   prompt injection\n\t    -   data poisoning & backdoor attacks\n\t- [Evan Morikawa's guide to LLM math](https:\u002F\u002Fnewsletter.pragmaticengineer.com\u002Fp\u002Fscaling-chatgpt), especially the section on the 5 scaling challenges\n  -  [A Hacker's Guide to Language Models](https:\u002F\u002Ftwitter.com\u002Fjeremyphoward\u002Fstatus\u002F1705883362991472984?s=20)  ([YouTube](https:\u002F\u002Fyoutu.be\u002FjkrNMKz9pWU?si=BNz-v6VmdbX7QDtr)) Jeremy Howard's comprehensive 90-minute overview, starting from the basics: the three-step pretraining\u002Ffinetuning\u002Fclassifier ULMFiT approach used by all modern LLMs.\n  - https:\u002F\u002Fspreadsheets-are-all-you-need.ai\n  - [“Catching up on the weird world of LLMs”](https:\u002F\u002Fsimonwillison.net\u002F2023\u002FAug\u002F3\u002Fweird-world-of-llms\u002F) - Simon Willison's 40-minute overview + [open questions for AI engineers](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=AjLVoAu-u-Q)\n  - [Flyte's overview of LLMs](https:\u002F\u002Fflyte.org\u002Fblog\u002Fgetting-started-with-large-language-models-key-things-to-know#what-are-llms)\n  - Clementine Fourrier on [how evals work](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fclefourrier\u002Fllm-evaluation)\n  - [VLMs from zero to hero](https:\u002F\u002Fgithub.com\u002FSkalskiP\u002Fvlms-zero-to-hero) ([tweet](https:\u002F\u002Fx.com\u002Fskalskip92\u002Fstatus\u002F1871247056343322624\u002Fphoto\u002F1))\n  - [Patterns for building LLM systems and products](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F) - a very good summary\n\t  - [Evals](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#evals-to-measure-performance): to measure performance\n\t-   [RAG](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#retrieval-augmented-generation-to-add-knowledge): to add recent, external knowledge\n\t-   [Fine-tuning](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#fine-tuning-to-get-better-at-specific-tasks): to get better at specific tasks\n\t-   [Caching](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#caching-to-reduce-latency-and-cost): to reduce latency and cost\n\t-   [Guardrails](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#guardrails-to-ensure-output-quality): to ensure output quality\n\t-   
[Defensive UX](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#defensive-ux-to-anticipate--handle-errors-gracefully): to anticipate and handle errors gracefully\n\t-   [Collect user feedback](https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Fllm-patterns\u002F#collect-user-feedback-to-build-our-data-flywheel): to build our data flywheel\n  - [Vector databases: a technical primer [PDF]](https:\u002F\u002Ftge-data-web.nyc3.digitaloceanspaces.com\u002Fdocs\u002FVector%20Databases%20-%20A%20Technical%20Primer.pdf), great vector-database slides\n\t  - missing coverage of hybrid search (vector + lexical). [More discussion](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=38971221)\n  - [A visual introduction to machine learning](http:\u002F\u002Fwww.r2d3.us\u002Fvisual-intro-to-machine-learning-part-1\u002F)\n  - A16z AI canon https:\u002F\u002Fa16z.com\u002F2023\u002F05\u002F25\u002Fai-canon\u002F\n\t  -  **[Software 2.0](https:\u002F\u002Fkarpathy.medium.com\u002Fsoftware-2-0-a64152b37c35)**: Andrej Karpathy was one of the first to clearly explain (in 2017!) why the new AI wave really matters. His argument is that AI is a new and powerful way of programming computers. As LLMs have improved rapidly, this thesis has proven prescient, and it gives a good mental model for how the AI market may evolve.\n\t-   **[State of GPT](https:\u002F\u002Fbuild.microsoft.com\u002Fen-US\u002Fsessions\u002Fdb3f4859-cd30-4445-a0cd-553c3304f8e2)**: also from Karpathy, a very approachable explanation of how ChatGPT\u002FGPT models work in general, how to use them, and what directions R&D may take.\n\t-   [**What Is ChatGPT Doing … and Why Does It Work?**](https:\u002F\u002Fwritings.stephenwolfram.com\u002F2023\u002F02\u002Fwhat-is-chatgpt-doing-and-why-does-it-work\u002F): computer scientist and entrepreneur Stephen Wolfram gives an excellent, readable, first-principles explanation of how modern AI models work, tracing the arc from early neural networks to today's LLMs and ChatGPT.\n\t-   **[Transformers, explained](https:\u002F\u002Fdaleonai.com\u002Ftransformers-explained)**: this short piece by Dale Markowitz directly answers the question “what is an LLM and how does it work?” It is a great way for beginners to build an intuition for the technology. Although originally written about GPT-3, it applies to newer models too.\n\t-   **[How Stable Diffusion works](https:\u002F\u002Fmccormickml.com\u002F2022\u002F12\u002F21\u002Fhow-stable-diffusion-works\u002F)**: the computer-vision counterpart to the previous piece. Chris McCormick explains in plain language how Stable Diffusion works and builds an overall intuition for text-to-image models. For an even gentler intro, see this [comic](https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fzs5dk5\u002Fi_made_an_infographic_to_explain_how_stable\u002F) from the r\u002FStableDiffusion community.\n\t\t- (2025) 3blue1brown on [how diffusion works](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=iv-5mZ_9CPY)\n\t- explainers\n\t\t-   [**Deep Learning in a Nutshell: Core Concepts**](https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fdeep-learning-nutshell-core-concepts\u002F): a four-part series from NVIDIA covering the basics of deep learning as of 2015, useful for those just getting started with AI.\n\t\t-   **[Practical Deep Learning for Coders](https:\u002F\u002Fcourse.fast.ai\u002F)**: a comprehensive, free course on the fundamentals of AI, taught through practical examples and code.\n\t\t-   **[Word2vec explained](https:\u002F\u002Ftowardsdatascience.com\u002Fword2vec-explained-49c52b4ccb71)**: an easy introduction to embeddings and tokens, the basic building blocks of LLMs (and all language models).\n\t\t\t- https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=44708028\n\t\t-   **[Yes you should understand backprop](https:\u002F\u002Fkarpathy.medium.com\u002Fyes-you-should-understand-backprop-e2f06eab496b)**: a deeper dive worth reading if you want to understand the details of backpropagation. To go further, see the [Stanford CS231n lecture](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=i94OvYb6noo) ([course link](http:\u002F\u002Fcs231n.stanford.edu\u002F2016\u002F)).\n\t- courses\n\t\t-   **[Stanford CS229](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rMiGQp3WXShtMGgzqpfVfbU)**: Andrew Ng's Introduction to Machine Learning, covering the fundamentals of machine learning.\n\t\t-   **[Stanford CS224N](https:\u002F\u002Fwww.youtube.com\u002Fplaylist?list=PLoROMvodv4rOSH4v6133s9LFPRHjEmbmJ)**: Chris Manning's NLP with Deep Learning, from the basics of NLP up through the first generation of LLMs.\n  - https:\u002F\u002Fgithub.com\u002Fmlabonne\u002Fllm-course\n  - https:\u002F\u002Fcims.nyu.edu\u002F~sbowman\u002Feightthings.pdf\n\t  1. LLMs predictably get more capable with increasing investment, even without targeted innovation.\n\t  2. Many important LLM behaviors emerge unpredictably as a byproduct of increasing investment.\n\t  3. LLMs often appear to learn and use representations of the outside world.\n\t  4. There are no reliable techniques for steering the behavior of LLMs.\n\t  5. Experts are not yet able to interpret the inner workings of LLMs.\n\t  6. Human performance on a task isn't an upper bound on LLM performance.\n\t  7. LLMs need not express the values of their creators nor the values encoded in web text.\n\t  8. Brief interactions with LLMs are often misleading.\n\t  9. 
simonw highlighted https:\u002F\u002Ffedi.simonwillison.net\u002F@simon\u002F110144185463887790\n  - 10 open challenges in LLM research https:\u002F\u002Fhuyenchip.com\u002F2023\u002F08\u002F16\u002Fllm-research-open-challenges.html\n  - OpenAI prompt engineering cookbook https:\u002F\u002Fgithub.com\u002Fopenai\u002Fopenai-cookbook\u002Fblob\u002Fmain\u002Ftechniques_to_improve_reliability.md\n  - an overview of prompt engineering https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-03-15-prompt-engineering\u002F\n  - https:\u002F\u002Fmoultano.wordpress.com\u002F2023\u002F06\u002F28\u002Fthe-many-ways-that-digital-minds-can-know\u002F comparing search vs AI\n  - recap of the major AI developments of 2022 https:\u002F\u002Fwww.deeplearning.ai\u002Fthe-batch\u002Fissue-176\u002F\n  - DALLE2 asset generation + inpainting https:\u002F\u002Ftwitter.com\u002Faifunhouse\u002Fstatus\u002F1576202480936886273?s=20&t=5EXa1uYDPVa2SjZM-SxhCQ\n  - suhail's journey https:\u002F\u002Ftwitter.com\u002FSuhail\u002Fstatus\u002F1541276314485018625?s=20&t=X2MVKQKhDR28iz3VZEEO8w\n  - composable diffusion - “AND” instead of “and” https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\u002Fstatus\u002F1580293860902985728\n  - on BPE tokenization https:\u002F\u002Ftowardsdatascience.com\u002Fbyte-pair-encoding-subword-based-tokenization-algorithm-77828a70bee0; see also Google SentencePiece and OpenAI TikToken.\n\t  - visualization here: https:\u002F\u002Flucalp.dev\u002Fbitter-lesson-tokenization-and-blt\u002F\n\t  - source in the GPT-2 code: https:\u002F\u002Fgithub.com\u002Fopenai\u002Fgpt-2\u002Fblob\u002Fmaster\u002Fsrc\u002Fencoder.py\n\t  - note that BPE has flaws: https:\u002F\u002Fwww.lesswrong.com\u002Fposts\u002FdFbfCLZA4pejckeKc\u002Fa-mechanistic-explanation-for-solidgoldmagikarp-like-tokens?commentId=9jNdKscwEWBB4GTCQ\n\t\t  - [\u002F\u002F---------------------------------------------------------------------------------------------------------------- is a single GPT-4 token](https:\u002F\u002Ftwitter.com\u002Fgoodside\u002Fstatus\u002F1753192905844592989)\n\t\t  - [GPT-3.5 crashes when it thinks about useRalativeImagePath too much](https:\u002F\u002Fiter.ca\u002Fpost\u002Fgpt-crash\u002F)\n\t\t  - 
leads to issues with math and string\u002Fcharacter handling: https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35363769\n\t\t  - and causes [eval issues](https:\u002F\u002Fx.com\u002Fmain_horse\u002Fstatus\u002F1744560083957411845?s=20)\n\t\t  - [glitch tokens](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=39086318) arise when the dataset used for the tokenizer differs from the LLM's\n\t\t  - [Karpathy on why tokenization is complicated](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=zduSFxRajkE)\n\t  - https:\u002F\u002Fplatform.openai.com\u002Ftokenizer and https:\u002F\u002Fgithub.com\u002Fopenai\u002Ftiktoken (more up to date: https:\u002F\u002Ftiktokenizer.vercel.app\u002F)\n\t  - Wordpiece -> BPE -> SentenceTransformer\n\t\t  -  [a preliminary read on embeddings](https:\u002F\u002Ftowardsdatascience.com\u002Fneural-network-embeddings-explained-4d028e6f0526?gi=ee46baab0d8f)\n\t\t  - https:\u002F\u002Fyoutu.be\u002FQdDoFfkVkcw?si=qefZSDDSpxDNd313\n\t\t-   [Huggingface MTEB benchmarks many embeddings](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fmteb)\n\t\t-   [some notable issues with GPT-3 embeddings](https:\u002F\u002Ftwitter.com\u002FNils_Reimers\u002Fstatus\u002F1487014195568775173) and alternatives worth considering\n\t  - https:\u002F\u002Fobservablehq.com\u002F@simonw\u002Fgpt-3-token-encoder-decoder\n\t  - Karpathy hopes tokenization can eventually be eliminated: https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1657949234535211009\n\t  - no positional encoding needed for decoder-only models: https:\u002F\u002Ftwitter.com\u002Fa_kazemnejad\u002Fstatus\u002F1664277559968927744?s=20\n  - making up your own language https:\u002F\u002Ftwitter.com\u002Fgiannis_daras\u002Fstatus\u002F1531693104821985280\n  - Google Cloud generative AI learning path https:\u002F\u002Fwww.cloudskillsboost.google\u002Fpaths\u002F118\n  - img2img https:\u002F\u002Fandys.page\u002Fposts\u002Fhow-to-draw\u002F\n  - on language modeling https:\u002F\u002Flena-voita.github.io\u002Fnlp_course\u002Flanguage_modeling.html, an accessible but technical explanation of language generation, including sampling from distributions and some mechanistic interpretability (e.g. finding a neuron that tracks quotation state)\n  - quest for photorealism https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fx9zmjd\u002Fquest_for_ultimate_photorealism_part_2_colors\u002F\n    - 
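Tying the tokenization notes above together: the core BPE idea — repeatedly merge the most frequent adjacent pair of symbols — can be sketched in a few lines of pure Python. This is only a toy illustration; GPT-2's `encoder.py`, SentencePiece, and tiktoken implement byte-level variants with pre-tokenization rules on top.

```python
# Toy BPE sketch: learn merges by repeatedly fusing the most frequent
# adjacent symbol pair. Illustrative only, not a production tokenizer.
from collections import Counter

def learn_bpe(text, num_merges):
    tokens = list(text)          # start from individual characters
    merges = []
    for _ in range(num_merges):
        pairs = Counter(zip(tokens, tokens[1:]))
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]   # most frequent pair
        merges.append((a, b))
        merged, i = [], 0
        while i < len(tokens):                     # apply the merge
            if i + 1 < len(tokens) and tokens[i] == a and tokens[i + 1] == b:
                merged.append(a + b)
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens, merges

tokens, merges = learn_bpe("low lower lowest", 4)
# the first merges fuse the frequent "l"+"o" and "lo"+"w" pairs
```

Glitch tokens like SolidGoldMagikarp arise exactly because merges learned on one corpus can produce symbols the LLM later sees almost never during training.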
https:\u002F\u002Fmedium.com\u002Fmerzazine\u002Fprompt-design-for-dall-e-photorealism-emulating-reality-6f478df6f186\n  - settings tweaking https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fx3k79h\u002Fthe_feeling_of_discovery_sd_is_like_a_great_proc\u002F\n    - seed selection https:\u002F\u002Fwww.reddit.com\u002Fr\u002FStableDiffusion\u002Fcomments\u002Fx8szj9\u002Ftutorial_seed_selection_and_the_impact_on_your\u002F\n    - study of minor parameter differences (steps, clamp_max, ETA, cutn_batches, etc) https:\u002F\u002Ftwitter.com\u002FKyrickYoung\u002Fstatus\u002F1500196286930292742\n    - Generative AI: autocomplete for everything https:\u002F\u002Fnoahpinion.substack.com\u002Fp\u002Fgenerative-ai-autocomplete-for-everything?sd=pf\n    - [How does GPT Obtain its Ability? Tracing Emergent Abilities of Language Models to their Sources](https:\u002F\u002Fyaofu.notion.site\u002FHow-does-GPT-Obtain-its-Ability-Tracing-Emergent-Abilities-of-Language-Models-to-their-Sources-b9a57ac0fcf74f30a1ab9e3e36fa1dc1) - an excellent paper documenting the history of the GPT family of models and how their capabilities evolved\n- https:\u002F\u002Fbarryz-architecture-of-agentic-llm.notion.site\u002FAlmost-Everything-I-know-about-LLMs-d117ca25d4624199be07e9b0ab356a77\n\n### Advanced Reads\n\n- https:\u002F\u002Fgithub.com\u002FMooler0410\u002FLLMsPracticalGuide\n\t- a curated list of the key papers\n- https:\u002F\u002Fgithub.com\u002FeleutherAI\u002Fcookbook#the-cookbook Eleuther AI's list of training resources. Compare with https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Ftuning_playbook\n- anti-hype LLM reading list https:\u002F\u002Fgist.github.com\u002Fveekaybee\u002Fbe375ab33085102f9027853128dc5f0e\n- [6 papers recommended by OpenAI's Jason Wei](https:\u002F\u002Ftwitter.com\u002F_jasonwei\u002Fstatus\u002F1729585618311950445) ([blog](https:\u002F\u002Fwww.jasonwei.net\u002Fblog\u002Fsome-intuitions-about-large-language-models))\n\t- the GPT-3 paper (https:\u002F\u002Farxiv.org\u002Fabs\u002F2005.14165)\n\t- chain-of-thought prompting (https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)\n\t- scaling laws (https:\u002F\u002Farxiv.org\u002Fabs\u002F2001.08361)\n\t- emergent abilities (https:\u002F\u002Farxiv.org\u002Fabs\u002F2206.07682)\n\t- 
language models can follow both flipped labels and semantically unrelated labels (https:\u002F\u002Farxiv.org\u002Fabs\u002F2303.03846)\n - [LLM paper notes](https:\u002F\u002Fgithub.com\u002Feugeneyan\u002Fllm-paper-notes) - notes from the [Latent Space paper club](https:\u002F\u002Fwww.latent.space\u002Fabout#%C2%A7components), by [Eugene Yan](https:\u002F\u002Feugeneyan.com\u002F)\n - [CMU LLM course syllabus](https:\u002F\u002Fllmsystem.github.io\u002Fllmsystem2024spring\u002Fdocs\u002FSyllabus)\n - explainer of [batching in inference](https:\u002F\u002Fwww.seangoedecke.com\u002Finference-batching-and-deepseek\u002F), relevant to DeepSeek R1\n- transformers from scratch https:\u002F\u002Fe2eml.school\u002Ftransformers.html\n\t- transformers vs LSTMs: https:\u002F\u002Fmedium.com\u002Fanalytics-vidhya\u002Fwhy-are-lstms-struggling-to-matchup-with-transformers-a1cc5b2557e3\n\t- transformer code walkthrough: https:\u002F\u002Ftwitter.com\u002Fmark_riedl\u002Fstatus\u002F1555188022534176768\n\t- the transformer family: https:\u002F\u002Flilianweng.github.io\u002Fposts\u002F2023-01-27-the-transformer-family-v2\u002F\n\t\t- Carmack's paper list: https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=34639634\n\t\t- Transformer models: an introduction and catalog: https:\u002F\u002Farxiv.org\u002Fabs\u002F2302.07730\n\t\t- DeepMind: Formal Algorithms for Transformers: https:\u002F\u002Farxiv.org\u002Fpdf\u002F2207.09238.pdf\n\t- Jay Alammar explainers\n\t\t- https:\u002F\u002Fjalammar.github.io\u002Fillustrated-transformer\u002F\n\t\t- https:\u002F\u002Fjalammar.github.io\u002Fvisualizing-neural-machine-translation-mechanics-of-seq2seq-models-with-attention\u002F\n- Karpathy on transformers\n\t- **Convergence**: the ongoing consolidation in AI is incredible. A decade or so ago, vision, speech, natural language, reinforcement learning, etc. were completely separate fields; you could not read papers across areas — the approaches were completely different and often not even ML-based. In the 2010s all of these started to transition 1) to machine learning and specifically 2) to neural nets. The architectures were diverse, but at least the papers started to read more similarly, with everyone using large datasets and optimizing neural nets. Over roughly the last two to three years, though, even the neural-net architectures across all areas have started to converge — to the Transformer (definable in about 200 lines of PyTorch: [https:\u002F\u002Fgithub.com\u002Fkarpathy\u002FminGPT\u002Fblob\u002Fmaster\u002Fmingpt\u002Fmodel.py…](https:\u002F\u002Ft.co\u002FxQL5NyJkLE)), with only minor differences — whether as a strong baseline or (more often) as the state of the art. ([tweetstorm](https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1468370605229547522?s=20))\n\t- **Why transformers won**: the Transformer is a magnificent neural-network architecture because it is a general-purpose differentiable computer. It is simultaneously: 1) expressive (in the forward pass), 2) optimizable (via backpropagation and gradient descent), and 3) efficient (a highly parallel compute graph). [tweetstorm](https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1582807367988654081)\n\t\t- https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1593417989830848512?s=20\n\t\t- elaborated in a [1-hour Stanford lecture](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=XfpMkf4rD6E) and an [8-minute Lex Fridman summary](https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=9uw3F6rndnA)\n\t- [BabyGPT](https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1645115622517542913): with two tokens 0\u002F1 and a context length of 3, view it as a finite-state Markov chain. It was trained on the sequence “111101111011110” for 50 iterations; the parameters and architecture of the Transformer modify the probabilities on the arrows.\n\t- build GPT from scratch: https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=kCc8FmEb1nY\n\t- build a different GPT from scratch in 60 lines of code: https:\u002F\u002Fjaykmody.com\u002Fblog\u002Fgpt-from-scratch\u002F\n- [Diffusion models from scratch, from a new theoretical perspective](https:\u002F\u002Fwww.chenyang.co\u002Fdiffusion.html) - a code-driven introduction to diffusion models\n- [137 emergent abilities of large language models](https:\u002F\u002Fwww.jasonwei.net\u002Fblog\u002Femergence)\n\t- emergent few-shot prompted tasks: the BIG-Bench and MMLU benchmarks\n\t- emergent prompting strategies\n\t\t- [instruction following](https:\u002F\u002Fopenreview.net\u002Fforum?id=gEZrGCozdqR)\n\t\t- [scratchpads](https:\u002F\u002Fopenreview.net\u002Fforum?id=iedYJm92o0a)\n\t\t- [fact-checking with open-book knowledge](https:\u002F\u002Farxiv.org\u002Fabs\u002F2112.11446)\n\t\t- [chain-of-thought prompting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2201.11903)\n\t\t- [differentiable search index](https:\u002F\u002Farxiv.org\u002Fabs\u002F2202.06991)\n\t\t- [self-consistency](https:\u002F\u002Farxiv.org\u002Fabs\u002F2203.11171)\n\t\t- [leveraging explanations in prompting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2204.02329)\n\t\t- [least-to-most prompting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.10625)\n\t\t- 
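The BabyGPT framing above (vocabulary {0,1}, context length 3, so 2³ = 8 possible states) can be made concrete by plain counting — a hypothetical stand-in for the actual trained transformer, which would learn smoothed versions of these transition probabilities rather than estimating them directly:

```python
# Markov-chain view of BabyGPT: estimate P(next token | 3-token context)
# by counting occurrences in the training sequence from Karpathy's demo.
from collections import Counter, defaultdict

seq = "111101111011110"                 # the training sequence
counts = defaultdict(Counter)
for i in range(len(seq) - 3):
    context, nxt = seq[i:i + 3], seq[i + 3]
    counts[context][nxt] += 1           # tally each observed transition

# Normalize counts into per-state emission probabilities
probs = {ctx: {tok: n / sum(c.values()) for tok, n in c.items()}
         for ctx, c in counts.items()}
# e.g. state "111" emits 0 and 1 equally often; state "110" always emits 1
```

A transformer trained on the same sequence approximates this table with its weights; changing the parameters changes the probabilities on the arrows of the state diagram.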
[zero-shot chain of thought](https:\u002F\u002Farxiv.org\u002Fabs\u002F2205.11916)\n\t\t- [calibration via P(True)](https:\u002F\u002Farxiv.org\u002Fabs\u002F2207.05221)\n\t\t- [multilingual chain of thought](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.03057)\n\t\t- [ask-me-anything prompting](https:\u002F\u002Farxiv.org\u002Fabs\u002F2210.02441)\n\t- some pushback - are these abilities a mirage? just avoid overly harsh metrics\n\t\t- https:\u002F\u002Fwww.jasonwei.net\u002Fblog\u002Fcommon-arguments-regarding-emergent-abilities\n\t\t- https:\u002F\u002Fhai.stanford.edu\u002Fnews\u002Fais-ostensible-emergent-abilities-are-mirage\n  - images\n\t  - Eugene Yan's explanation of the text-to-image stack: https:\u002F\u002Feugeneyan.com\u002Fwriting\u002Ftext-to-image\u002F\n\t  - VQGAN\u002FCLIP: https:\u002F\u002Fminimaxir.com\u002F2021\u002F08\u002Fvqgan-clip\u002F\n\t  - 10 years of image synthesis: https:\u002F\u002Fzentralwerkstatt.org\u002Fblog\u002Ften-years-of-image-synthesis\n\t  - Vision Transformers (ViT) explained: https:\u002F\u002Fwww.pinecone.io\u002Flearn\u002Fvision-transformers\u002F\n  - negative prompting: https:\u002F\u002Fminimaxir.com\u002F2022\u002F11\u002Fstable-diffusion-negative-prompt\u002F\n  - best papers of 2022: https:\u002F\u002Fwww.yitay.net\u002Fblog\u002F2022-best-nlp-papers\n  - [Predictability and Surprise in Large Generative Models](https:\u002F\u002Farxiv.org\u002Fpdf\u002F2202.07785.pdf) - a good survey of what we currently know about LLM scaling, capabilities, and the state of the field\n- more prompt-engineering papers: https:\u002F\u002Fgithub.com\u002Fdair-ai\u002FPrompt-Engineering-Guide\n- https:\u002F\u002Fcreator.nightcafe.studio\u002Fvqgan-clip-keyword-modifier-comparison VQGAN+CLIP keyword modifier comparison\n\t- “We compared 126 keyword modifiers with the same prompt and initial image. Here are the results.”\n\t- https:\u002F\u002Fcreator.nightcafe.studio\u002Fcollection\u002F8dMYgKm1eVXG7z9pV23W\n- the history of transformers\n\t- Richard Socher on his contribution to the attention mechanism, which eventually led to transformers: https:\u002F\u002Fovercast.fm\u002F+r1P4nKfFU\u002F1:00:00\n\t- https:\u002F\u002Fkipp.ly\u002Fblog\u002Ftransformer-taxonomy\u002F “this document is my running literature review for people trying to catch up on AI: 22 models, 11 architectural changes, 7 post-pre-training techniques and 3 training techniques (and 5 things that are none of the above).”\n\t- [Understanding Large Language Models: A Cross-Section of the Most Relevant Literature To Get Up to Speed](https:\u002F\u002Fmagazine.sebastianraschka.com\u002Fp\u002Funderstanding-large-language-models)\n\t\t- notably credits Bahdanau et al. (2014) with the first application of a softmax over token scores to compute attention, laying the groundwork for the original Transformer of Vaswani et al. (2017). https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35589756\n\t- https:\u002F\u002Ffinbarrtimbers.substack.com\u002Fp\u002Ffive-years-of-progress-in-gpts GPT1\u002F2\u002F3, Megatron, Gopher, Chinchilla, PaLM, LLaMa\n\t- a good summary paper (the 8 things): https:\u002F\u002Fcims.nyu.edu\u002F~sbowman\u002Feightthings.pdf\n- [HuggingFace's MOE explainer](https:\u002F\u002Fhuggingface.co\u002Fblog\u002Fmoe)\n- https:\u002F\u002Fblog.alexalemi.com\u002Fkl-is-all-you-need.html\n- Google released PartiPrompts as a benchmark: https:\u002F\u002Fparti.research.google\u002F “PartiPrompts (P2) is a rich set of over 1600 prompts in English that we release as part of this work. P2 can be used to measure model capabilities across various categories and challenge aspects.”\n- video tutorials\n  - pixel art https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=UvJkQPtr-8s&feature=youtu.be\n- a timeline of papers\n\t- 2008: Unified Architecture for NLP (Collobert-Weston) https:\u002F\u002Ftwitter.com\u002Fylecun\u002Fstatus\u002F1611921657802768384\n\t- 2015: [Semi-supervised Sequence Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1511.01432) https:\u002F\u002Ftwitter.com\u002Fdeliprao\u002Fstatus\u002F1611896130589057025?s=20\n\t- 2017: Transformers (Vaswani et al)\n\t- 2018: GPT (Radford et al)\n- misc\n  - the StabilityAI CIO's perspective https:\u002F\u002Fdanieljeffries.substack.com\u002Fp\u002Fthe-turning-point-for-truly-open?sd=pf\n  - https:\u002F\u002Fgithub.com\u002Fawesome-stable-diffusion\u002Fawesome-stable-diffusion\n  - https:\u002F\u002Fgithub.com\u002Fmicrosoft\u002FLMOps guide to Microsoft's prompt research\n  - gwern's behind-the-scenes discussion of Bing, GPT4, and the Microsoft-OpenAI relationship https:\u002F\u002Fwww.lesswrong.com\u002Fposts\u002FjtoPawEhLNXNxvgTT\u002Fbing-chat-is-blatantly-aggressively-misaligned\n\n### Other similar lists\n\n- https:\u002F\u002Fgist.github.com\u002Frain-1\u002Feebd5e5eb2784feecf450324e3341c8d\n- https:\u002F\u002Fgithub.com\u002Funderlines\u002Fawesome-marketing-datascience\u002Fblob\u002Fmaster\u002Fawesome-ai.md#llama-models\n- https:\u002F\u002Fgithub.com\u002Fimaurer\u002Fawesome-decentralized-llm\n\n## Communities\n\n- 
Discord（参见 https:\u002F\u002Fbuttondown.email\u002Fainews 获取每日邮件摘要，实时更新）\n\t- [Latent Space Discord](https:\u002F\u002Fdiscord.gg\u002FxJJMRaWCRt)（我们的！）\n\t- 通用黑客与学习\n\t\t- [ChatGPT Hackers Discord](https:\u002F\u002Fwww.chatgpthackers.dev\u002F)\n\t\t- [Alignment Lab AI Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002Fk36qjUxyJC)\n\t\t- [Nous Research Discord](https:\u002F\u002Fdiscord.gg\u002FT3kTZfYzs6)\n\t\t- [DiscoLM Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002FvGRFMnS6c2)\n\t\t- [Karpathy Discord](https:\u002F\u002Fdiscord.gg\u002F3zy8kqD9Cp)（已不活跃）\n\t\t- [HuggingFace Discord](https:\u002F\u002Fdiscuss.huggingface.co\u002Ft\u002Fjoin-the-hugging-face-discord\u002F11263)\n\t\t- [Skunkworks AI Discord](https:\u002F\u002Fdiscord.gg\u002F3Sfmpd3Njt)（新）\n\t\t- [Jeff Wang\u002FLLM性能爱好者Discord](https:\u002F\u002Ftwitter.com\u002Fwangzjeff)\n\t\t- [CUDA Mode（Mark Saroufim）](https:\u002F\u002Fdiscord.com\u002Finvite\u002FWu4pdW8QqM)，可参阅[Youtube](https:\u002F\u002Fwww.youtube.com\u002F@CUDAMODE)和[GitHub](https:\u002F\u002Fgithub.com\u002Fcuda-mode)\n\t- 艺术\n\t\t- [StableDiffusion Discord](https:\u002F\u002Fdiscord.com\u002Finvite\u002Fstablediffusion)\n\t\t- Deforum Discord https:\u002F\u002Fdiscord.gg\u002FupmXXsrwZc\n\t\t- Lexica Discord https:\u002F\u002Fdiscord.com\u002Finvite\u002FbMHBjJ9wRh\n\t- AI研究\n\t\t- LAION Discord https:\u002F\u002Fdiscord.gg\u002FxBPBXfcFHd\n\t\t- Eleuther Discord：https:\u002F\u002Fwww.eleuther.ai\u002Fget-involved\u002F（入门指南：https:\u002F\u002Fblog.eleuther.ai\u002Fyear-one\u002F）\n\t- 各类初创公司\n\t\t- Perplexity Discord https:\u002F\u002Fdiscord.com\u002Finvite\u002FkWJZsxPDuX\n\t\t- Midjourney的Discord\n\t\t  - 如何使用Midjourney v4 https:\u002F\u002Ftwitter.com\u002Ffabianstelzer\u002Fstatus\u002F1588856386540417024?s=20&t=PlgLuGAEEds9HwfegVRrpg\n\t\t- https:\u002F\u002Fstablehorde.net\u002F\n\t- 代理\n\t\t- AutoGPT Discord\n\t\t- BabyAGI Discord\n- Reddit\n\t- 
https:\u002F\u002Freddit.com\u002Fr\u002FstableDiffusion\n\t- https:\u002F\u002Fwww.reddit.com\u002Fr\u002FLocalLLaMA\u002F\n\t- https:\u002F\u002Fwww.reddit.com\u002Fr\u002Fbing\n\t- https:\u002F\u002Fwww.reddit.com\u002Fr\u002Fopenai\n\n\n## 人物\n\n> *许多人并不知道，如今越来越多的前沿信息已经不在Arxiv上，来源包括但不限于：https:\u002F\u002Fgithub.com\u002Ftrending、HN、那些小众的Discord服务器、X上的动漫头像匿名用户、Reddit *- [K](https:\u002F\u002Ftwitter.com\u002Fkarpathy\u002Fstatus\u002F1733968385472704548)\n\n这份列表可能会过时，但可以作为起点。我实时关注的人物列表位于：https:\u002F\u002Ftwitter.com\u002Fi\u002Flists\u002F1585430245762441216\n\n- 研究人员\u002F开发者\n  - https:\u002F\u002Ftwitter.com\u002F_jasonwei\n  - https:\u002F\u002Ftwitter.com\u002Fjohnowhitaker\u002Fstatus\u002F1565710033463156739\n  - https:\u002F\u002Ftwitter.com\u002Faltryne\u002Fstatus\u002F1564671546341425157\n  - https:\u002F\u002Ftwitter.com\u002FSchmidhuberAI\n  - https:\u002F\u002Ftwitter.com\u002Fnearcyan\n  - https:\u002F\u002Ftwitter.com\u002Fkarinanguyen_\n  - https:\u002F\u002Ftwitter.com\u002Fabhi_venigalla\n  - https:\u002F\u002Ftwitter.com\u002Fadvadnoun\n  - https:\u002F\u002Ftwitter.com\u002Fpolynoamial\n  - https:\u002F\u002Ftwitter.com\u002Fvovahimself\n  - https:\u002F\u002Ftwitter.com\u002Fsarahookr\n  - https:\u002F\u002Ftwitter.com\u002FshaneguML\n  - https:\u002F\u002Ftwitter.com\u002FMaartenSap\n  - https:\u002F\u002Ftwitter.com\u002FethanCaballero\n  - https:\u002F\u002Ftwitter.com\u002FShayneRedford\n  - https:\u002F\u002Ftwitter.com\u002Fseb_ruder\n  - https:\u002F\u002Ftwitter.com\u002Frasbt\n  - https:\u002F\u002Ftwitter.com\u002Fwightmanr\n  - https:\u002F\u002Ftwitter.com\u002FGaryMarcus\n  - https:\u002F\u002Ftwitter.com\u002Fylecun\n  - https:\u002F\u002Ftwitter.com\u002Fkarpathy\n  - https:\u002F\u002Ftwitter.com\u002Fpirroh\n  - https:\u002F\u002Ftwitter.com\u002Feerac\n  - https:\u002F\u002Ftwitter.com\u002Fteknium\n  - https:\u002F\u002Ftwitter.com\u002Falignment_lab\n  - https:\u002F\u002Ftwitter.com\u002Fpicocreator\n  - 
https:\u002F\u002Ftwitter.com\u002Fcharlespacker\n  - https:\u002F\u002Ftwitter.com\u002Fldjconfirmed\n  - https:\u002F\u002Ftwitter.com\u002Fnisten\n  - https:\u002F\u002Ftwitter.com\u002Ffar__el\n  - https:\u002F\u002Ftwitter.com\u002Fi\u002Flists\u002F1713824630241202630\n- 新闻\u002F聚合者\n  - https:\u002F\u002Ftwitter.com\u002Fai__pub\n  - https:\u002F\u002Ftwitter.com\u002FWeirdStableAI\n  - https:\u002F\u002Ftwitter.com\u002Fmultimodalart\n  - https:\u002F\u002Ftwitter.com\u002FLastWeekinAI\n  - https:\u002F\u002Ftwitter.com\u002Fpaperswithcode\n  - https:\u002F\u002Ftwitter.com\u002FDeepLearningAI_\n  - https:\u002F\u002Ftwitter.com\u002Fdl_weekly\n  - https:\u002F\u002Ftwitter.com\u002FslashML\n  - https:\u002F\u002Ftwitter.com\u002F_akhaliq\n  - https:\u002F\u002Ftwitter.com\u002Faaditya_ai\n  - https:\u002F\u002Ftwitter.com\u002Fbentossell\n  - https:\u002F\u002Ftwitter.com\u002Fjohnvmcdonnell\n- 创始人\u002F建设者\u002F风投\n  - https:\u002F\u002Ftwitter.com\u002Flevelsio\n  - https:\u002F\u002Ftwitter.com\u002Fgoodside\n  - https:\u002F\u002Ftwitter.com\u002Fc_valenzuelab\n  - https:\u002F\u002Ftwitter.com\u002FRaza_Habib496\n  - https:\u002F\u002Ftwitter.com\u002Fsharifshameem\u002Fstatus\u002F1562455690714775552\n  - https:\u002F\u002Ftwitter.com\u002Fgenekogan\u002Fstatus\u002F1555184488606564353\n  - https:\u002F\u002Ftwitter.com\u002Flevelsio\u002Fstatus\u002F1566069427501764613?s=20&t=camPsWtMHdSSEHqWd0K7Ig\n  - https:\u002F\u002Ftwitter.com\u002Famanrsanger\n  - https:\u002F\u002Ftwitter.com\u002Fctjlewis\n  - https:\u002F\u002Ftwitter.com\u002Fsarahcat21\n  - https:\u002F\u002Ftwitter.com\u002FjackclarkSF\n  - https:\u002F\u002Ftwitter.com\u002Falexandr_wang\n  - https:\u002F\u002Ftwitter.com\u002Frameerez\n  - https:\u002F\u002Ftwitter.com\u002Fscottastevenson\n  - https:\u002F\u002Ftwitter.com\u002Fdenisyarats\n- 稳定性相关\n  - https:\u002F\u002Ftwitter.com\u002FStabilityAI\n  - https:\u002F\u002Ftwitter.com\u002FStableDiffusion\n  - 
https:\u002F\u002Ftwitter.com\u002Fhardmaru\n  - https:\u002F\u002Ftwitter.com\u002FJJitsev\n- OpenAI\n  - https:\u002F\u002Ftwitter.com\u002Fsama\n  - https:\u002F\u002Ftwitter.com\u002Filyasut\n  - https:\u002F\u002Ftwitter.com\u002Fmiramurati\n- HuggingFace\n  - https:\u002F\u002Ftwitter.com\u002Fyounesbelkada\n- 艺术家\n  - https:\u002F\u002Ftwitter.com\u002Fkarenxcheng\u002Fstatus\u002F1564626773001719813\n  - https:\u002F\u002Ftwitter.com\u002FTomLikesRobots\n- 其他\n  - 公司\n    - https:\u002F\u002Ftwitter.com\u002FAnthropicAI\n    - https:\u002F\u002Ftwitter.com\u002FAssemblyAI\n    - https:\u002F\u002Ftwitter.com\u002FCohereAI\n    - https:\u002F\u002Ftwitter.com\u002FMosaicML\n    - https:\u002F\u002Ftwitter.com\u002FMetaAI\n    - https:\u002F\u002Ftwitter.com\u002FDeepMind\n    - https:\u002F\u002Ftwitter.com\u002FHelloPaperspace\n- 机器人与应用\n  - https:\u002F\u002Ftwitter.com\u002Fdreamtweetapp\n  - https:\u002F\u002Ftwitter.com\u002Faiarteveryhour\n\n## 引言、現實與降溫論\n\n- 狹隘、枯燥的領域用例 https:\u002F\u002Ftwitter.com\u002FWillManidis\u002Fstatus\u002F1584900092615528448 和 https:\u002F\u002Ftwitter.com\u002FWillManidis\u002Fstatus\u002F1584900100480192516\n- 反炒作 https:\u002F\u002Ftwitter.com\u002Falexandr_wang\u002Fstatus\u002F1573302977418387457\n- 反炒作 https:\u002F\u002Ftwitter.com\u002Ffchollet\u002Fstatus\u002F1612142423425138688?s=46&t=pLCNW9pF-co4bn08QQVaUg\n- 提示工程相關迷因\n\t- https:\u002F\u002Ftwitter.com\u002F_jasonwei\u002Fstatus\u002F1516844920367054848\n- Stable Diffusion 遇到的困難 https:\u002F\u002Fopguides.info\u002Fposts\u002Faiartpanic\u002F\n- 新版 Google\n  -  https:\u002F\u002Ftwitter.com\u002Falexandr_wang\u002Fstatus\u002F1585022891594510336\n- 新版 PowerPoint\n  - 由 emad 提及\n- UI 中默認追加提示詞\n  -  DALL·E: https:\u002F\u002Ftwitter.com\u002Flevelsio\u002Fstatus\u002F1588588688115912705?s=20&t=0ojpGmH9k6MiEDyVG2I6gg\n- 此前曾歷經兩次寒冬，分別是 1974–1980 年和 1987–1993 年。https:\u002F\u002Fwww.erichgrunewald.com\u002Fposts\u002Fthe-prospect-of-an-ai-winter\u002F。更多評論請見 
[这里](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=37474528)。相关概念——[AI 效应](https:\u002F\u002Fwww.sequoiacap.com\u002Farticle\u002Fai-paradox-perspective\u002F)——“一旦它能运作，就不再是 AI 了”。\n- 不过就是矩阵乘法\u002F随机鹦鹉模型而已\n\t- 连 LLM 怀疑论者 Yann LeCun 也认为 LLM 具备一定理解能力：https:\u002F\u002Ftwitter.com\u002Fylecun\u002Fstatus\u002F1667947166764023808\n\t- Gary Marcus 的文章《深度学习正遭遇瓶颈》https:\u002F\u002Fnautil.us\u002Fdeep-learning-is-hitting-a-wall-238440\u002F 主张推动符号系统发展。\n- “过来人”反炒作派→忧虑者\n\t- https:\u002F\u002Fadamkarvonen.github.io\u002Fmachine_learning\u002F2024\u002F03\u002F20\u002Fchess-gpt-interventions.html#next-token-predictors\n\n## 法律、伦理与隐私\n\n- 不适宜内容过滤器 https:\u002F\u002Fvickiboykis.com\u002F2022\u002F11\u002F18\u002Fsome-notes-on-the-stable-diffusion-safety-filter\u002F\n- 论“AI 艺术恐慌” https:\u002F\u002Fopguides.info\u002Fposts\u002Faiartpanic\u002F\n\t- [我通过 Midjourney 失去了所有让我热爱工作的东西](https:\u002F\u002Fold.reddit.com\u002Fr\u002Fblender\u002Fcomments\u002F121lhfq\u002Fi_lost_everything_that_made_me_love_my_job\u002F)\n\t- [Midjourney 艺术家名单](https:\u002F\u002Fwww.theartnewspaper.com\u002F2024\u002F01\u002F04\u002Fleaked-names-of-16000-artists-used-to-train-midjourney-ai#)\n- Yannick 对 OPENRAIL-M 的影响 https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=W5M-dvzpzSQ\n- 接受 AI 美术作品的艺术院校 https:\u002F\u002Ftwitter.com\u002FDaveRogenmoser\u002Fstatus\u002F1597746558145265664\n- DRM 问题 https:\u002F\u002Fundeleted.ronsor.com\u002Fvoice.ai-gpl-violations-with-a-side-of-drm\u002F\n- 偷窃艺术作品 [https:\u002F\u002Fstablediffusionlitigation.com](https:\u002F\u002Fstablediffusionlitigation.com\u002F)\n\t- http:\u002F\u002Fwww.stablediffusionfrivolous.com\u002F\n\t- Stable Diffusion 的归属问题 https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=34670136\n\t- 迪士尼方面的反驳意见 https:\u002F\u002Ftwitter.com\u002Fjonst0kes\u002Fstatus\u002F1616219435492163584?s=46&t=HqQqDH1yEwhWUSQxYTmF8w\n\t- 关于 Stable Diffusion 抄袭问题的研究 
https:\u002F\u002Ftwitter.com\u002Fofficialzhvng\u002Fstatus\u002F1620535905298817024?s=20&t=NC-nW7pfDa8nyRD08Lx1Nw。这篇论文使用 Stable Diffusion 根据 35 万个提示生成了 1.75 亿张图像，结果只发现 109 张与训练数据极为相似的复制品。我的主要结论是否应该是：Stable Diffusion 在避免记忆训练样本方面表现得非常出色？\n- 网页内容抓取\n\t- https:\u002F\u002Fblog.ericgoldman.org\u002Farchives\u002F2023\u002F08\u002Fweb-scraping-for-me-but-not-for-thee-guest-blog-post.htm\n\t- Sarah Silverman 案件——OpenAI 的回应 https:\u002F\u002Farstechnica.com\u002Ftech-policy\u002F2023\u002F08\u002Fopenai-disputes-authors-claims-that-every-chatgpt-response-is-a-derivative-work\u002F\n- 授权许可\n\t- [AI 模型参数并非开放“源代码”——Sid Sijbrandij](https:\u002F\u002Fopencoreventures.com\u002Fblog\u002F2023-06-27-ai-weights-are-not-open-source\u002F)\n- 多样性与公平性\n\t- 将少数族裔性化 https:\u002F\u002Ftwitter.com\u002Flanadenina\u002Fstatus\u002F1680238883206832129，原因在于“色情内容擅长处理人体形象” https:\u002F\u002Ftwitter.com\u002Flevelsio\u002Fstatus\u002F1680665706235404288\n\t- [OpenAI 为了让 DALL·E 更具多样性，随意加上“黑人”字样](https:\u002F\u002Ftwitter.com\u002Frzhang88\u002Fstatus\u002F1549472829304741888?s=20)\n- 隐私——保密计算 https:\u002F\u002Fwww.edgeless.systems\u002Fblog\u002Fhow-confidential-computing-and-ai-fit-together\u002F\n- AI 取代工作 https:\u002F\u002Fdonaldclarkplanb.blogspot.com\u002F2024\u002F02\u002Fthis-is-why-idea-that-ai-will-just.html\n\n## 对齐、安全\n\n- Anthropic - https:\u002F\u002Farxiv.org\u002Fpdf\u002F2112.00861.pdf\n\t- 有益：尝试完成用户请求。简洁高效，适时追问，引导偏离主题的问题。\n\t- 诚实：提供准确信息，坦诚表达不确定性。若自身不具备相应能力或知识，则不模仿专家的回答。\n\t- 无害：避免冒犯或歧视性内容。拒绝协助任何危险行为。能够识别敏感或具有重大影响的建议。\n\t- 关于批评与界限的未来方向：https:\u002F\u002Ftwitter.com\u002Fdavidad\u002Fstatus\u002F1628489924235206657?s=46&t=TPVwcoqO8qkc7MuaWiNcnw\n- 埃利泽·尤德科夫斯基的全部著作\n\t- https:\u002F\u002Ftwitter.com\u002Fesyudkowsky\u002Fstatus\u002F1625922986590212096\n\t- AGI 致命性列表：https:\u002F\u002Fwww.lesswrong.com\u002Fposts\u002FuMQ3cqWDPHhjtiesc\u002Fagi-ruin-a-list-of-lethalities\n\t- 
需要注意的是，埃利泽过去及近期都曾发表过颇具争议的言论（[过去](https:\u002F\u002Ftwitter.com\u002Fjohnnysands42\u002Fstatus\u002F1641349759754485760?s=46&t=90xQ8sGy63D2OtiaoGJuww) 及 [最近](https:\u002F\u002Ftwitter.com\u002Florakolodny\u002Fstatus\u002F1641448759086415875?s=46&t=90xQ8sGy63D2OtiaoGJuww)），相关讨论也见于 [TIME 文章](https:\u002F\u002Ftime.com\u002F6266923\u002Fai-eliezer-yudkowsky-open-letter-not-enough\u002F)。\n- 康纳·利希可能是尤德科夫斯基更为理性、审慎且技术更扎实的版本：https:\u002F\u002Fovercast.fm\u002F+aYlOEqTJ0\n\t- 危险并不局限于“回形针工厂”模型。\n\t- https:\u002F\u002Fwww.lesswrong.com\u002Fposts\u002FHBxe6wdjxK239zajf\u002Fwhat-failure-looks-like\n- 六个月暂停实验的公开信\n\t- https:\u002F\u002Ffutureoflife.org\u002Fopen-letter\u002Fpause-giant-ai-experiments\u002F\n\t- 扬·勒丘恩与吴恩达的辩论：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=BY9KV8uCtj4\n\t- 斯科特·阿伦森的相关博文：https:\u002F\u002Fscottaaronson.blog\u002F?p=7174\n\t- 艾米丽·本德的回应：https:\u002F\u002Ftwitter.com\u002Femilymbender\u002Fstatus\u002F1640920936600997889\n\t- 杰弗里·辛顿离开谷歌：https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=35771104\n\t- 随后发表了一封简短的公开信：https:\u002F\u002Fwww.nytimes.com\u002F2023\u002F05\u002F30\u002Ftechnology\u002Fai-threat-warning.html\n- X风险\n\t- 避免 AI 引发的灭绝危机，是否真的如此紧迫？([链接](https:\u002F\u002Flink.mail.beehiiv.com\u002Fss\u002Fc\u002F5J8WPrGlKFK1BUsRYoWIfdCHPD-3Xbi8FugDN8_LxoMLoHhMJlEG7wG6Qm_xTk5kjhv7y5vwidMdRiSXu8XoBiq8nEOR34GaAFwHPM3qm-KgbLw6_hl3AQd9rRxt7mbTHvXRNeF6hfODzGg5z4t8D3ZdIldVTpoAGQ-KmKNEnmzBudTJIJtP1kjZLr1QqJYX\u002F3wo\u002Fz-oFlqV_RUGtJd6OO2FogA\u002Fh13\u002FXrV7_YgyheO615JC1X8VasmPENc7KRnJrp03iAlmoXw))\n\t- AI 并非军备竞赛。([链接](https:\u002F\u002Flink.mail.beehiiv.com\u002Fss\u002Fc\u002FznicDlvJFyGBhcMAVWxZFpwlt5VC0YnUsV4gzm_4ut3qiUuoiY9_n0aSS6Uv0inD2_kx5JhKOVXSRbXMrV7VwL_fuIMlfwAiTSTTCxo56Xv58IWHdUClCfyt4alUnKRf2MV5a7rIM0KG4vwVLObEua0i3t5UIvPlbHybyFluj52xGYswNiQUMZl2OrDzh1u4oLAvnCVkTUi5vCX0i6-N8A\u002F3wo\u002Fz-oFlqV_RUGtJd6OO2FogA\u002Fh14\u002FK2LmS7FyAGW-u4j6oHnp_bKapwqFG_Gb4MC5XPpKJsM)))\n\t- 如果我们要将 AI 
定义为“灭绝风险”，就必须明确其具体实现方式。([链接](https:\u002F\u002Flink.mail.beehiiv.com\u002Fss\u002Fc\u002FznicDlvJFyGBhcMAVWxZFsLJphRoW5fZiwv4ALj3pNMBRHKVGkJIME1sXnwK-P46O3jH_jtoC_wqyCeroi2bRUKEUKd_QQvXSoMgu3Nqbw99wsPjSDl_Lt6RSk7bni0KT4c1-gstNpWdPoUbj3air5NbOAbvtp5P9ds1xCm4qG-6dvoJELH0HHB7G9FO2ZFlXPTm37nswLD77q6opSiWnrTEHhHsCo37yO01bFol4LeaSr8F4e_WynvF0QrKLNaSKf0rDpyMSn__lxmbRl6M1A\u002F3wo\u002Fz-oFlqV_RUGtJd6OO2FogA\u002Fh15\u002FSYpE89X1W3Z_qSjH8YJmhLYYRRgjUHJzn2WILhBIcxw)))\n- OpenAI 超对齐计划：https:\u002F\u002Fwww.youtube.com\u002Fwatch?v=ZP_N4q5U3eE\n\n### 监管\n- 中国监管政策：https:\u002F\u002Fwww.chinalawtranslate.com\u002Fen\u002Foverview-of-draft-measures-on-generative-ai\u002F\n\t- https:\u002F\u002Ftwitter.com\u002Fmmitchell_ai\u002Fstatus\u002F1647697067006111745?s=46&t=90xQ8sGy63D2OtiaoGJuww\n\t- 中国是全球主要大国中唯一一个明确[监管]生成式 AI 的国家。\n- 意大利禁止 ChatGPT\n-   在日本举行的年度会议上，由工业化民主国家组成的非正式组织七国集团（G7）[宣布](https:\u002F\u002Finfo.deeplearning.ai\u002Fe3t\u002FCtc\u002FLX+113\u002FcJhC404\u002FVVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s_3q3nJV1-WJV7CgFv5W51g32V2hBgR-N3j2W3szNMJlW80w4Xv5Gg2S8N4_ZHQFYd4cRW8yvm4F2zg5qpW5xfrS61fJ8H4W49Nj5Y2zWcRbW97ym606Vq3X6W2-51W529GnLcW2zlMRl3qKmBCW8jd69B7nRzmFV5K0lP4FzrchW6nxHbj1vFJPqN3sbnlvFM2WhW6PNj-t5YfVS3W6pl7681yBKGxN1R1Mbj8wWj4W22BS_g1BH_1yW7pT8c47QKBQFW64WfHc80PxjRV6dQN42mCqRMW3yJrxC3DX4_5W5yqFbL34kwc0W770qZv2fjyv03bJQ1)了广岛进程——一个被赋予调查生成式 AI 风险职权的政府间工作组。G7 成员国包括加拿大、法国、德国、意大利、日本、英国和美国，承诺制定相互兼容的法律，并依据民主价值观来监管 AI，这些价值观包括公平、问责制、透明度、安全性、数据隐私、防止滥用以及尊重人权。\n-   
美国总统乔·拜登[发布](https:\u002F\u002Finfo.deeplearning.ai\u002Fe3t\u002FCtc\u002FLX+113\u002FcJhC404\u002FVVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s55nCT_V3Zsc37CgQX9W7wTfL38m-2KKW3mGNtx8sgMgJW10rjg65dMw5qN3jtZLMqRgQbV_3DXH2yr2HbW4vs2Tm43thGvW6fK8f72N6w37N53TdBst-8D1W6yzHrb70MHkTW1ckbRd5NfDP9W2j6yWK34KFvtW18lscs3lQ0G6W4GFgyx486-vdW5NJBQv4tvxYpW36FqGc4md2XfW2Fgj6n2fd-BSW3PyPVH9bD8W3N61PDTSyzVy1W2QSSm07tHjwWW8zG-Kl3TPwmfVMNjLb7Nnhk4W2B_zlf7n91mNW806djL3zxyMFW5RpR1Q9kcL0yW7ss_7m92D7Z-W4fWJYk3xBb3yN5bZbNkSvb14N2kgsftyLf7cN1WmZDl5Sw63W4FcWFn65g7DsVzPJZP2qtH36W3vfw782XRtSbW834rhB5jGZ7RW6K9z1d87ns4N38SY1)了一份关于 AI 的战略计划。该倡议呼吁美国监管机构开发用于训练、衡量和评估 AI 系统的公共数据集、基准测试和标准。\n-   本月早些时候，法国数据隐私监管机构[宣布](https:\u002F\u002Finfo.deeplearning.ai\u002Fe3t\u002FCtc\u002FLX+113\u002FcJhC404\u002FVVt-xv1bfQFxVTDC381T3c1vVLsY3L4_gp1wN1FQ0s_3q3nJV1-WJV7CgTpxW8C6yq247bfj8W4mQv0-4hl35_W8SPtZ52JXPlxW1Fkb5p54f30RW6sj0m71XsJ4yF7-b6kBx5vTW7cwGKJ6RcqpFW5325sQ2R54VbW79rbsP4wh6MyW2MwyS_6CSJfwW8VBz1y1M5_4nW2nhxPD5vZw17MCVDrTvH8ljW1JYH0t8DPm23W3BPQvW69f5TFW5ms3_413vDbJVw9GyW1yMYBfW6zpGVw12swbdV_wmsh11rtb0Vlzk0b6ZkhpZW1XWkdG7yNYpsW38p95C5jXCx7W4qrc4w1_q_sdW5RD3Jv7bdxpv2Gp1)了一项针对生成式 AI 的监管框架。\n- 监管与 X风险的关系：https:\u002F\u002F1a3orn.com\u002Fsub\u002Fessays-regulation-stories.html\n- [GPT-4V 中的多模态提示注入攻击](https:\u002F\u002Fnews.ycombinator.com\u002Fitem?id=37877605)\n\n## 杂项\n\n- Whisper\n  - https:\u002F\u002Fhuggingface.co\u002Fspaces\u002Fsensahin\u002FYouWhisper YouWhisper 使用 openai\u002Fwhisper 将 YouTube 视频转换为文本。\n  - https:\u002F\u002Ftwitter.com\u002Fjeffistyping\u002Fstatus\u002F1573145140205846528 YouTube Whisperer\n  - 多语言字幕 https:\u002F\u002Ftwitter.com\u002F1littlecoder\u002Fstatus\u002F1573030143848722433\n  - 视频字幕 https:\u002F\u002Ftwitter.com\u002Fm1guelpf\u002Fstatus\u002F1574929980207034375\n  - 你可以将 Whisper 与 Stable Diffusion 结合使用，原因见 https:\u002F\u002Ftwitter.com\u002Ffffiloni\u002Fstatus\u002F1573733520765247488\u002Fphoto\u002F1\n  - 已知问题 
https:\u002F\u002Ftwitter.com\u002Flunixbochs\u002Fstatus\u002F1574848899897884672（极端情况下可能导致灾难性失败）\n- 基于文本的音频生成 https:\u002F\u002Ftwitter.com\u002FFelixKreuk\u002Fstatus\u002F1575846953333579776\n- Codegen\n  - CodegeeX https:\u002F\u002Ftwitter.com\u002Fthukeg\u002Fstatus\u002F1572218413694726144\n  - https:\u002F\u002Fgithub.com\u002Fsalesforce\u002FCodeGen https:\u002F\u002Fjoel.tools\u002Fcodegen\u002F\n- PDF 转结构化数据 - Impira 曾经用它来实现（已失效链接：https:\u002F\u002Fwww.impira.com\u002Fblog\u002Fhey-machine-whats-my-invoice-total），但如果在 Twitter 上仔细搜索，还是能找到一些替代方案。\n- 文本到人体运动扩散模型 https:\u002F\u002Ftwitter.com\u002FGuyTvt\u002Fstatus\u002F1577947409551851520\n  - 论文摘要：https:\u002F\u002Farxiv.org\u002Fabs\u002F2209.14916\n  - 项目页面：https:\u002F\u002Fguytevet.github.io\u002Fmdm-page\u002F","# AI Notes 快速上手指南\n\n**工具简介**：\n`ai-notes` 并非一个可执行的软件包或框架，而是一个由社区维护的**开源知识库**。它汇集了关于生成式 AI、大语言模型（LLM）、AI 基础设施、图像\u002F音频生成等领域的最新状态、前沿论文、实用案例及工程笔记。本指南将帮助开发者快速访问并利用这些高质量资源。\n\n## 环境准备\n\n由于本项目本质上是 Markdown 文档集合，无需复杂的系统环境或依赖库。\n\n*   **操作系统**：任意支持 Git 的系统（Windows, macOS, Linux）。\n*   **前置依赖**：\n    *   `git`：用于克隆仓库。\n    *   现代浏览器或 Markdown 编辑器（如 VS Code, Obsidian）：用于阅读整理后的笔记。\n*   **网络要求**：\n    *   由于原始仓库托管在 GitHub，国内访问可能受限。建议配置代理或使用国内代码托管平台（如 Gitee）的镜像源（如有）。\n\n## 安装步骤\n\n### 方式一：克隆仓库（推荐，适合本地查阅与贡献）\n\n打开终端，执行以下命令将仓库克隆到本地：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fai-notes.git\ncd ai-notes\n```\n\n> **提示**：如果 GitHub 访问缓慢，可尝试搜索 Gitee 上的同步镜像，或使用 `git clone` 时配置加速代理。\n\n### 方式二：在线浏览\n\n直接访问 GitHub 仓库页面浏览目录结构：\nhttps:\u002F\u002Fgithub.com\u002Fsw-yx\u002Fai-notes\n\n## 基本使用\n\n克隆完成后，核心内容位于根目录下的各个 `.md` 文件中。请根据您的需求选择对应的文件进行阅读。\n\n### 1. 
查看核心分类笔记\n\n项目按技术领域划分了主要文档，使用文本编辑器或命令行查看：\n\n*   **文本生成与大模型 (Text & LLMs)**:\n    ```bash\n    # 查看 GPT-4 及聊天机器人相关笔记\n    cat TEXT.md\n    \n    # 查看提示词工程 (Prompt Engineering) 案例\n    cat TEXT_PROMPTS.md\n    \n    # 查看语义搜索相关信息\n    cat TEXT_SEARCH.md\n    ```\n\n*   **图像生成 (Image Generation)**:\n    ```bash\n    # 查看 Stable Diffusion, Midjourney, DALL-E 深度笔记\n    cat IMAGE_GEN.md\n    \n    # 查看优质图像提示词合集\n    cat IMAGE_PROMPTS.md\n    ```\n\n*   **代码生成 (Code Generation)**:\n    ```bash\n    # 查看 Copilot 等代码模型笔记\n    cat CODE.md\n    ```\n\n*   **音频与多模态 (Audio & Video)**:\n    ```bash\n    # 查看语音转录、音乐生成及视频生成笔记\n    cat AUDIO.md\n    ```\n\n*   **基础设施 (Infrastructure)**:\n    ```bash\n    # 查看硬件、缩放及 AI 工程化笔记\n    cat INFRA.md\n    ```\n\n### 2. 探索特定应用场景\n\n在 `README.md` 及上述分文件中，包含了大量\"Motivational Use Cases\"（激励性用例），例如：\n\n*   **3D 生成**：参考 `TEXT.md` 或搜索关键词 `text-to-3d` 获取 DreamFusion 等项目链接。\n*   **视频生成**：查阅 `AUDIO.md` 或 `IMAGE_GEN.md` 中关于 `img2video` 和 `Stable Diffusion Videos` 的 Colab 笔记本链接。\n*   **市场地图**：参考 README 中的 \"market maps\u002Flandscapes\" 章节，获取 Sequoia, a16z 等机构发布的最新 AI 栈全景图链接。\n\n### 3. 
利用资源链接\n\n文档中包含了大量外部链接（论文、博客、Demo、Colab 笔记本）。\n*   **学习路径**：初学者可优先阅读 `README.md` 中的 \"Beginner Reads\" 部分，包含 Karpathy 的 LLM 入门视频及吴恩达的相关课程链接。\n*   **进阶研究**：查阅 \"Intermediate Reads\" 及 \"State of AI Report\" 获取年度行业报告。\n\n> **注意**：本仓库内容为英文原文。如需中文理解，建议配合浏览器翻译插件或将具体段落输入翻译工具。该仓库的价值在于其 curated（精选）的链接和作者对技术趋势的精辟注释。","一位全栈开发者正试图为新产品集成最新的生成式 AI 功能，需要在短时间内掌握从文本大模型到图像生成的前沿技术栈。\n\n### 没有 ai-notes 时\n- **信息碎片化严重**：开发者需在 Twitter、Hugging Face 和各技术博客间反复跳转，难以系统性地获取关于 Stable Diffusion 或 GPT-4 的最新进展。\n- **提示词调试成本高**：缺乏经过验证的提示词库（Swipe File），每次尝试图像生成或复杂文本任务时都要从零开始摸索，浪费大量时间在无效参数调整上。\n- **基础设施认知盲区**：对 AI 硬件选型、模型缩放规律等底层架构知识（INFRA.md）缺乏清晰指引，导致技术方案设计时容易忽略性能瓶颈。\n- **资源链接失效快**：收藏的教程和工具链接随时间推移迅速过时或失效，无法找到长期稳定的权威参考源（Canonical References）。\n\n### 使用 ai-notes 后\n- **知识体系结构化**：直接通过 `IMAGE_GEN.md` 和 `TEXT.md` 等分类文件，快速获取按领域整理好的状态-of-the-art 综述，建立完整的技术地图。\n- **即拿即用的提示词库**：参考 `IMAGE_PROMPTS.md` 和 `TEXT_PROMPTS.md` 中精选的高质量案例，大幅缩短原型开发周期，快速产出可用结果。\n- **工程落地有依据**：查阅 `INFRA.md` 中的原始笔记，清晰理解算力需求与扩展策略，避免在架构设计阶段犯低级错误。\n- **永久有效的资源索引**：利用 `\u002FResources` 文件夹中清洗过的规范引用，确保团队内部共享的学习资料和技术文档长期可靠、随时可查。\n\nai-notes 将散乱的 AI 前沿资讯转化为软件工程师可立即执行的结构化工程资产，显著降低了新技术的学习与落地门槛。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fswyxio_ai-notes_662f0776.png","swyxio","swyx.io","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fswyxio_ccdd2a9d.jpg","ai\u002Fsoftware 3.0: https:\u002F\u002Flatent.space\u002F\r\n\r\ndevrel\u002Fdevtools: https:\u002F\u002Fdx.tips\u002F\r\n\r\nblog: https:\u002F\u002Fswyx.io\u002Fideas\r\n\r\nadvice\u002Fcareer book: https:\u002F\u002Flearninpublic.org","smol.ai","San Francisco",null,"swyx","https:\u002F\u002Flearninpublic.org","https:\u002F\u002Fgithub.com\u002Fswyxio",[84],{"name":85,"color":86,"percentage":87},"HTML","#e34c26",100,6199,551,"2026-04-16T14:03:52","MIT","","未说明",{"notes":95,"python":93,"dependencies":96},"该仓库（ai-notes）并非一个可执行的 AI 软件工具或代码库，而是一个关于人工智能（特别是生成式 AI 和大语言模型）的状态、资源、案例研究和阅读列表的知识库\u002F笔记集合。它由多个 Markdown 文件组成，用于记录行业动态、教程链接和应用案例。因此，它不需要特定的操作系统、GPU、内存、Python 
环境或依赖库来运行，仅需一个能够查看 Markdown 文件的文本编辑器或浏览器即可。",[],[13,14,98,15,35],"其他",[100,101,102,103,104,105,106],"ai","prompt-engineering","stable-diffusion","openai","gpt","gpt-3","multimodal","2026-03-27T02:49:30.150509","2026-04-18T00:45:31.927766",[],[]]
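上文的快速上手指南使用 `cat` 逐个查看主题笔记；若想跨全部 Markdown 文件检索某个关键词，可以用 `grep` 做全文搜索。下面是一个最小示例（假设已按指南将仓库克隆到本地 `ai-notes/` 目录；示例中创建的样例文件仅为离线演示用的占位，实际仓库中是完整笔记）：

```shell
# 最小示例：对本地 ai-notes 克隆做全文搜索。
# "ai-notes/" 目录与 *.md 文件名沿用上文指南的约定；
# 这里先创建一个占位文件，保证命令无需联网即可运行。
mkdir -p ai-notes
printf '# Image Generation\nStable Diffusion 深度笔记\n' > ai-notes/IMAGE_GEN.md

# 递归、不区分大小写地搜索所有 Markdown 文件，输出 文件:行号:匹配行
grep -rin "stable diffusion" --include="*.md" ai-notes
```

相比逐个 `cat`，`grep -rin` 能在所有分类文件中一次性定位关键词出现的位置，适合在这类以链接和短注为主的笔记库中快速查找资源。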