[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-basicmi--AI-Chip":3,"tool-basicmi--AI-Chip":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 
代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",154349,2,"2026-04-13T23:32:16",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 
协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":76,"owner_website":76,"owner_url":79,"languages":80,"stars":85,"forks":86,"last_commit_at":87,"license":76,"difficulty_score":88,"env_os":89,"env_gpu":90,"env_ram":90,"env_deps":91,"category_tags":94,"github_topics":95,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":101,"updated_at":102,"faqs":103,"releases":104},7387,"basicmi\u002FAI-Chip","AI-Chip","A list of ICs and IPs for AI, Machine Learning and Deep Learning.","AI-Chip 是一份专注于人工智能、机器学习及深度学习领域的集成电路（IC）与知识产权核（IP）全景指南。面对当前 AI 芯片赛道百家争鸣、技术迭代极快的现状，从业者往往难以全面掌握从科技巨头到初创公司的最新硬件动态。AI-Chip 正是为解决这一信息碎片化难题而生，它系统性地梳理并持续更新全球范围内的 AI 处理器资讯。\n\n这份清单不仅涵盖了 Nvidia、Google、Intel、Tesla 等行业领军企业的最新产品（如 Hopper 架构、Tensor 芯片、Dojo 系统等），还敏锐地追踪了 Groq、Cerebras、SambaNova 等新兴独角兽的创新进展。其独特亮点在于极高的时效性与广泛的覆盖面，定期整合 MLPerf 权威基准测试结果，并涉及传统芯片厂商及光计算、存内计算等前沿技术路线的突破。\n\nAI-Chip 
非常适合硬件工程师、算法研究人员、系统架构师以及关注半导体行业的投资分析师使用。对于希望深入了解底层算力支撑、进行选型对比或把握技术趋势的专业人士而言，AI-Chip 提供了一站式的参考视野，帮助用户高效洞察全球 AI 算力生态的最新格局。","\u003Cdiv align=\"center\">\u003Ch1>AI Chip (ICs and IPs)\u003C\u002Fh1>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_83ff08c0f8b5.png\">\u003C\u002Fdiv>\n\u003Cbr>\n\u003Cdiv align=\"center\">Editor \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fshan-tang-27342510\u002F\">\u003Cstrong>S.T.\u003C\u002Fstrong>\u003C\u002Fa>(Linkedin)\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cstrong>Welcome to My Wechat Blog \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fmp\u002Fappmsgalbum?action=getalbum&__biz=MzI3MDQ2MjA3OA==&scene=1&album_id=1374108991751782402&count=3#wechat_redirect\">StarryHeavensAbove\u003C\u002Fa> for more AI chip related articles\u003C\u002Fstrong>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cstrong>欢迎访问我的微信公众号 \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fmp\u002Fappmsgalbum?action=getalbum&__biz=MzI3MDQ2MjA3OA==&scene=1&album_id=1374108991751782402&count=3#wechat_redirect\">StarryHeavensAbove\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_fec5ee2511dc.jpg\" height=\"100\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a38aee112c7a.png\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n \n\u003Cdiv align=\"center\">\u003Ch2>Latest updates\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\n\u003Cfont color=\"Darkred\">\n\u003Cul>\n\u003Cli>Add 
news of \u003Ca href=\"#SambaNova\">SambaNova\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Qualcomm\">Qualcomm\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add link to \u003Ca href=\"#AIChipBenchmarks\">Latest MLPerf Results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#IBM\">IBM AIU\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Tesla\">Tesla Dojo\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add link to \u003Ca href=\"#AIChipBenchmarks\">Latest MLPerf Results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Tachyum\">Tachyum Prodigy Universal Processor\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Habana\">Intel Habana Gaudi®2\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Modular\">Modular AI in AI compiler section\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Teramem\">TeraMem\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Aspinity\">Aspinity\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Synopsys\">Synopsys DesignWare ARC NPX6 NPU IP\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Nvidia\">Nvidia Hopper\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Graphcore\">Graphcore\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add 
startup \u003Ca href=\"#Ceremorphic\">Ceremorphic\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Lightelligence\">Lightelligence\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add link to \u003Ca href=\"#AIChipBenchmarks\">Latest MLPerf Results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Habana\">Habana\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Google\">Google Tensor Chip\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Intel\">Intel Loihi 2\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Tesla\">Tesla Dojo\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Untether\">Untether AI\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Innatera\">Innatera Nanosystems\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#EdgeQ\">EdgeQ\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Quadric\">Quadric\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#AnalogInference\">Analog Inference\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Tenstorrent\">Tenstorrent\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Google\">Google\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#SiMa\">SiMa.ai\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add startup \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Add news of \u003Ca href=\"#SambaNova\">SambaNova\u003C\u002Fa>.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003C\u002Ffont>\n\n\u003Cdiv align=\"center\">\u003Ch1> 
\u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>Shortcut\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\u003Ctable style=\"width:100%\">\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#IC_Vendors\">IC Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#Intel\">Intel\u003C\u002Fa>, \u003Ca href=\"#Qualcomm\">Qualcomm\u003C\u002Fa>, \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>, \u003Ca href=\"#Samsung\">Samsung\u003C\u002Fa>, \u003Ca href=\"#AMD\">AMD\u003C\u002Fa>,\u003Ca href=\"#IBM\">IBM\u003C\u002Fa>, \u003Ca href=\"#Marvell\">Marvell\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#Tech_Giants\">Tech Giants & HPC Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#Google\">Google\u003C\u002Fa>, \u003Ca href=\"#Amazon_AWS\">Amazon_AWS\u003C\u002Fa>, \u003Ca href=\"#Microsoft\">Microsoft\u003C\u002Fa>, \u003Ca href=\"#Apple\">Apple\u003C\u002Fa>, \u003Ca href=\"#Alibaba\">Alibaba Group\u003C\u002Fa>, \u003Ca href=\"#Tencent_Cloud\">Tencent Cloud\u003C\u002Fa>, \u003Ca href=\"#Baidu\">Baidu\u003C\u002Fa>, \u003Ca href=\"#Fujitsu\">Fujitsu\u003C\u002Fa>, \u003Ca href=\"#Nokia\">Nokia\u003C\u002Fa>, \u003Ca href=\"#Facebook\">Facebook\u003C\u002Fa>, \u003Ca href=\"#Tesla\">Tesla\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#IP_Vendors\">IP Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#ARM\">ARM\u003C\u002Fa>, \u003Ca href=\"#Synopsys\">Synopsys\u003C\u002Fa>, \u003Ca href=\"#Imagination\">Imagination\u003C\u002Fa>, \u003Ca href=\"#CEVA\">CEVA\u003C\u002Fa>, \u003Ca href=\"#Cadence\">Cadence\u003C\u002Fa>, \u003Ca href=\"#VeriSilicon\">VeriSilicon\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>  \n    \u003Cth>\u003Ca href=\"#Startups_Worldwide\">Startups\u003C\u002Fa>\u003C\u002Fth>\n    \u003Ctd>\u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>, \u003Ca href=\"#Graphcore\">Graphcore\u003C\u002Fa>, \u003Ca 
href=\"#Tenstorrent\">Tenstorrent\u003C\u002Fa>, \u003Ca href=\"#Blaize\">Blaize\u003C\u002Fa>, \u003Ca href=\"#Koniku\">Koniku\u003C\u002Fa>, \u003Ca href=\"#Adapteva\">Adapteva\u003C\u002Fa>, \u003Ca href=\"#Mythic\">Mythic\u003C\u002Fa>, \u003Ca href=\"#Brainchip\">BrainChip\u003C\u002Fa>, \u003Ca href=\"#Leepmind\">Leepmind\u003C\u002Fa>, \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>, \u003Ca href=\"#Kneron\">Kneron\u003C\u002Fa>, \u003Ca href=\"#Esperanto\">Esperanto Technologies\u003C\u002Fa>, \u003Ca href=\"#GTI\">Gyrfalcon Technology\u003C\u002Fa>, \u003Ca href=\"#SambaNova\">SambaNova Systems\u003C\u002Fa>, \u003Ca href=\"#GreenWaves\">GreenWaves Technology\u003C\u002Fa>, \u003Ca href=\"#Lightelligence\">Lightelligence\u003C\u002Fa>, \u003Ca href=\"#Lightmatter\">Lightmatter\u003C\u002Fa>, \u003Ca href=\"#Hailo\">Hailo\u003C\u002Fa>, \u003Ca href=\"#Tachyum\">Tachyum\u003C\u002Fa>, \u003Ca href=\"#Alphaics\">AlphaICs\u003C\u002Fa>, \u003Ca href=\"#Syntiant\">Syntiant\u003C\u002Fa>, \u003Ca href=\"#aiCTX\">aiCTX\u003C\u002Fa>, \u003Ca href=\"#Flexlogix\">Flex Logix\u003C\u002Fa>, \u003Ca href=\"#PFN\">Preferred Network\u003C\u002Fa>, \u003Ca href=\"#Cornami\">Cornami\u003C\u002Fa>, \u003Ca href=\"#Anaflash\">Anaflash\u003C\u002Fa>, \u003Ca href=\"#Optalysys\">Optalysys\u003C\u002Fa>, \u003Ca href=\"#etacompute\">Eta Compute\u003C\u002Fa>, \u003Ca href=\"#Achronix\">Achronix\u003C\u002Fa>, \u003Ca href=\"#Areanna\">Areanna AI\u003C\u002Fa>, \u003Ca href=\"#Neuroblade\">Neuroblade\u003C\u002Fa>, \u003Ca href=\"#Luminous\">Luminous Computing\u003C\u002Fa>, \u003Ca href=\"#Efinix\">Efinix\u003C\u002Fa>, \u003Ca href=\"#AIstorm\">AISTORM\u003C\u002Fa>, \u003Ca href=\"#SiMa\">SiMa.ai\u003C\u002Fa>, \u003Ca href=\"#Untether\">Untether AI\u003C\u002Fa>, \u003Ca href=\"#GrAI\">GrAI Matter Lab\u003C\u002Fa>, \u003Ca href=\"#Rain\">Rain Neuromorphics\u003C\u002Fa>, \u003Ca href=\"#ABR\">Applied Brain Research\u003C\u002Fa>, \u003Ca href=\"#Xmos\">XMOS\u003C\u002Fa>, \u003Ca href=\"#DinoplusAI\">DinoPlusAI\u003C\u002Fa>, \u003Ca href=\"#Furiosa\">Furiosa AI\u003C\u002Fa>, \u003Ca href=\"#Perceive\">Perceive\u003C\u002Fa>, \u003Ca href=\"#SimpleMachines\">SimpleMachines\u003C\u002Fa>, \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>, \u003Ca href=\"#AnalogInference\">Analog Inference\u003C\u002Fa>, \u003Ca href=\"#Quadric\">Quadric\u003C\u002Fa>, \u003Ca href=\"#EdgeQ\">EdgeQ\u003C\u002Fa>, \u003Ca href=\"#Innatera\">Innatera Nanosystems\u003C\u002Fa>, \u003Ca href=\"#Ceremorphic\">Ceremorphic\u003C\u002Fa>, \u003Ca href=\"#Aspinity\">Aspinity\u003C\u002Fa>, \u003Ca href=\"#Teramem\">TeraMem\u003C\u002Fa>, \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"IC_Vendors\">\u003C\u002Fa>I. IC Vendors\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Nvidia\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_74f7256a47eb.png\" height=\"50\"> \u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3>GPU\u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fnvidianews.nvidia.com\u002Fnews\u002Fnvidia-microsoft-accelerate-cloud-enterprise-ai\">NVIDIA Teams With Microsoft to Build Massive Cloud AI Computer\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Tens of Thousands of NVIDIA GPUs, NVIDIA Quantum-2 InfiniBand and Full Stack of NVIDIA AI Software Coming to Azure; NVIDIA, Microsoft and Global Enterprises to Use Platform for Rapid, Cost-Effective AI Development and Deployment\u003C\u002Fp>\n\u003C\u002Fblockquote> 
\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fnvidia-hopper-architecture-in-depth\">NVIDIA Hopper Architecture In-Depth\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Today during the 2022 NVIDIA GTC Keynote address, NVIDIA CEO Jensen Huang introduced the new NVIDIA H100 Tensor Core GPU based on the new NVIDIA Hopper GPU architecture. This post gives you a look inside the new H100 GPU and describes important new features of NVIDIA Hopper architecture GPUs.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16610\u002Fnvidia-unveils-grace-a-highperformance-arm-server-cpu-for-use-in-ai-systems\">NVIDIA Unveils Grace: A High-Performance Arm Server CPU For Use In Big AI Systems\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Kicking off another busy Spring GPU Technology Conference for NVIDIA, this morning the graphics and accelerator designer is announcing that they are going to once again design their own Arm-based CPU\u002FSoC. Dubbed Grace – after Grace Hopper, the computer programming pioneer and US Navy rear admiral – the CPU is NVIDIA’s latest stab at more fully vertically integrating their hardware stack by being able to offer a high-performance CPU alongside their regular GPU wares. 
According to NVIDIA, the chip is being designed specifically for large-scale neural network workloads, and is expected to become available in NVIDIA products in 2023.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Intel\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a7a82a1976a6.png\" height=\"60\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Ca name=\"Mobileye\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Ch3>Mobileye EyeQ\u003C\u002Fh3>\u003C\u002Fdiv>\n> Mobileye is currently developing its fifth generation SoC, the \u003Ca href=\"https:\u002F\u002Fwww.mobileye.com\u002Four-technology\u002Fevolution-eyeq-chip\u002F\">EyeQ®5\u003C\u002Fa>, to act as the vision central computer performing sensor fusion for Fully Autonomous Driving (Level 5) vehicles that will hit the road in 2020. To meet power consumption and performance targets, EyeQ® SoCs are designed in most advanced VLSI process technology nodes – down to 7nm FinFET in the 5th generation. \n\n\u003Ca name=\"Loihi 2\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Ch3>Loihi\u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fnewsroom\u002Fnews\u002Fintel-unveils-neuromorphic-loihi-2-lava-software.html\">Intel Advances Neuromorphic with Loihi 2, New Lava Software Framework and New Partners\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Second-generation research chip uses pre-production Intel 4 process, grows to 1 million neurons. 
Intel adds open software framework to accelerate developer innovation and path to commercialization.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Habana\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ch3>Habana\u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fnewsroom\u002Fnews\u002Fvision-2022-habana-gaudi2-greco.html\">Intel’s Habana Labs Launches Second-Generation AI Processors for Training and Inferencing\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Today at Intel Vision, Intel announced that Habana Labs, its data center team focused on AI deep learning processor technologies, launched its second-generation deep learning processors for training and inference: Habana® Gaudi®2 and Habana® Greco™. These new processors address an industry gap by providing customers with high-performance, high-efficiency deep learning compute choices for both training workloads and inference deployments in the data center while lowering the AI barrier to entry for companies of all sizes.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fhabana.ai\u002Faws-launches-ec2-dl1-instances\u002F\">Habana Gaudi debuts in the Amazon EC2 cloud\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The primary motivation to create this new training instance class was presented by Andy Jassy in the 2020 re:Invent: “To provide our end-customers with up to 40% better price-performance than the current generation of GPU-based instances.”\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Qualcomm\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_02281afe6520.png\" height=\"40\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> 
\u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Ca href=\"https:\u002F\u002Fwww-forbes-com.cdn.ampproject.org\u002Fc\u002Fs\u002Fwww.forbes.com\u002Fsites\u002Fkarlfreund\u002F2022\u002F11\u002F16\u002Fqualcomm-ups-the-snapgragon-ai-game\u002Famp\u002F\">Qualcomm Ups The Snapdragon AI Game\u003C\u002Fa>\n\u003Cblockquote>\n  \u003Cp>The leader in premium mobile SoCs has applied AI across the entire platform.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.qualcomm.com\u002Fproducts\u002Ftechnology\u002Fprocessors\u002Fcloud-artificial-intelligence\u002Fcloud-ai-100\">Qualcomm Cloud AI 100\u003C\u002Fa>\u003C\u002Fstrong>\n\u003Cblockquote>\n  \u003Cp>The Qualcomm Cloud AI 100, designed for AI inference acceleration, addresses unique requirements in the cloud, including power efficiency, scale, process node advancements, and signal processing—facilitating the ability of datacenters to run inference on the edge cloud faster and more efficiently. Qualcomm Cloud AI 100 is designed to be a leading solution for datacenters who increasingly rely on infrastructure at the edge-cloud.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Samsung\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ad6b3054b304.png\" height=\"35\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnews.samsung.com\u002Fglobal\u002Fsamsung-brings-on-device-ai-processing-for-premium-mobile-devices-with-exynos-9-series-9820-processor\">Samsung Brings On-device AI Processing for Premium Mobile Devices with Exynos 9 Series 9820 Processor\u003C\u002Fa>\u003C\u002Fstrong>\n> Fourth-generation custom core and 2.0Gbps LTE Advanced Pro modem enables enriched mobile experiences including AR and VR applications \n\n\u003Cbr> 
\nSamsung recently unveiled “\u003Ca href=\"https:\u002F\u002Fnews.samsung.com\u002Fglobal\u002Fsamsung-optimizes-premium-exynos-9-series-9810-for-ai-applications-and-richer-multimedia-content\">The new Exynos 9810 brings premium features with a 2.9GHz custom CPU, an industry-first 6CA LTE modem and deep learning processing capabilities\u003C\u002Fa>”.   \n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"AMD\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_957f712e31f6.png\" height=\"35\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\nThe soon-to-be-released \u003Ca href=\"https:\u002F\u002Fwww.amd.com\u002Fen\u002Fgraphics\u002Finstinct-server-accelerators\">AMD Instinct™ MI Series Accelerators\u003C\u002Fa>\n> AMD Instinct™ accelerators are engineered from the ground up for this new era of data center computing, supercharging HPC and AI workloads to propel new discoveries. 
The AMD Instinct™ family of accelerators can deliver industry leading performance for the data center at any scale from single server solutions up to the world’s largest supercomputers.1 With new innovations in AMD CDNA™ 2 architecture, AMD Infinity Fabric™ technology and packaging technology, the latest AMD Instinct™ accelerators are designed to power discoveries at exascale, enabling scientists to tackle our most pressing challenges.\n\n\u003Cp>\u003Ca name=\"IBM\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5beb2661c40d.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fsystems\u002Fibm-telum-processor-the-next-gen-microprocessor-for-ibm-z-and-ibm-linuxone\u002F\">Meet the IBM Artificial Intelligence Unit\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>It’s our first complete system-on-chip designed to run and train deep learning models faster and more efficiently than a general-purpose CPU.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fsystems\u002Fibm-telum-processor-the-next-gen-microprocessor-for-ibm-z-and-ibm-linuxone\u002F\">IBM Telum Processor: the next-gen microprocessor for IBM Z and IBM LinuxONE\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>The 7 nm microprocessor is engineered to meet the demands our clients face for gaining AI-based insights from their data without compromising response time for high volume transactional workloads. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fresearch\u002Ftag\u002Ftruenorth\u002F\">TrueNorth\u003C\u002Fa> is IBM's Neuromorphic CMOS ASIC developed in conjunction with the DARPA \u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FSyNAPSE\">SyNAPSE\u003C\u002Fa> program.\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>It is a manycore processor network on a chip design, with 4096 cores, each one simulating 256 programmable silicon \"neurons\" for a total of just over a million neurons. In turn, each neuron has 256 programmable \"synapses\" that convey the signals between them. Hence, the total number of programmable synapses is just over 268 million (2\u003Csup>28\u003C\u002Fsup>). In terms of basic building blocks, its transistor count is 5.4 billion. Since memory, computation, and communication are handled in each of the 4096 neurosynaptic cores, TrueNorth circumvents the von-Neumann-architecture bottlenecks and is very energy-efficient, consuming 70 milliwatts, about 1\u002F10,000th the power density of conventional microprocessors. \u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTrueNorth\">Wikipedia\u003C\u002Fa>\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.research.ibm.com\u002Fartificial-intelligence\u002Fai-hardware-center\u002F\">AI Hardware Center\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The IBM Research AI Hardware Center is a global research hub headquartered in Albany, New York. 
The center is focused on enabling next-generation chips and systems that support the tremendous processing power and unprecedented speed that AI requires to realize its full potential.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Marvell\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2bad2c36c703.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.marvell.com\u002Fproducts\u002Fdata-processing-units.html\">Data Processing Units\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Built on seven generations of the industry’s first, most scalable and widely adopted data infrastructure processors, Marvell’s OCTEON™, OCTEON™ Fusion and ARMADA® platforms are optimized for wireless infrastructure, wireline carrier networks, enterprise and cloud data centers.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"Tech_Giants\">\u003C\u002Fa>II. 
Tech Giants & HPC Vendors\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Google\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9e09b3d608f6.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fgoogle-tensor-everything-you-need-to-know-about-the-pixel-6-chip\u002F\">Google Tensor: Everything you need to know about the Pixel 6 chip\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Google has taken the wraps off its latest Pixel smartphones and, among the changes, the one with the biggest long-term impact is the switch to in-house silicon for the search giant.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002F2021\u002F05\u002F20\u002Fgoogle-launches-tpu-v4-ai-chips\u002F\">Google Launches TPU v4 AI Chips\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Google CEO Sundar Pichai spoke for only one minute and 42 seconds about the company’s latest TPU v4 Tensor Processing Units during his keynote at the Google I\u002FO virtual conference this week, but it may have been the most important and awaited news from the event.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fcloud.google.com\u002Ftpu\">Cloud TPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Machine learning has produced business and research breakthroughs ranging from network security to medical diagnoses. We built the Tensor Processing Unit (TPU) in order to make it possible for anyone to achieve similar breakthroughs. 
Cloud TPU is the custom-designed machine learning ASIC that powers Google products like Translate, Photos, Search, Assistant, and Gmail. Here’s how you can put the TPU and machine learning to work accelerating your company’s success, especially at scale.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fcloud.google.com\u002Fedge-tpu\u002F\">Edge TPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AI is pervasive today, from consumer to enterprise applications. With the explosive growth of connected devices, combined with a demand for privacy\u002Fconfidentiality, low latency, and bandwidth constraints, AI models trained in the cloud increasingly need to be run at the edge. Edge TPU is Google’s purpose-built ASIC designed to run AI at the edge. It delivers high performance in a small physical and power footprint, enabling the deployment of high-accuracy AI at the edge.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>Other references are:\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fb22p26_delWfSpy9kDJKhA\">Google TPU3 Highlights (in Chinese)\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FKf_L4u7JRxJ8kF3Pi8M5iw\">Google TPU Demystified (in Chinese)\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FlBQyNSNa6-joeLZ_Kq2W8A\">Google’s Neural Network Processor Patents (in Chinese)\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fg-BDlvSy-cx4AKItcWF7jQ\">Systolic Arrays: Reborn Through the Google TPU (in Chinese)\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fshould-we-all-embrace-systolic-arrays-chien-ping-lu\">Should We All Embrace Systolic Arrays?\u003C\u002Fa>\u003Cbr>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Amazon_AWS\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a6c1d3963794.png\" height=\"50\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fmachine-learning\u002Ftrainium\u002F\">AWS Trainium\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AWS Trainium is the second custom machine learning (ML) chip designed by AWS that provides the best price performance for training deep learning models in the cloud.  Trainium offers the highest performance with the most teraflops (TFLOPS) of compute power for the fastest ML training in Amazon EC2 and enables a broader set of ML applications. The Trainium chip is specifically optimized for deep learning training workloads for applications including image classification, semantic search, translation, voice recognition, natural language processing and recommendation engines.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fmachine-learning\u002Finferentia\u002F\">AWS Inferentia. High performance machine learning inference chip, custom designed by AWS.\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AWS Inferentia provides high throughput, low latency inference performance at an extremely low cost. Each chip provides hundreds of TOPS (tera operations per second) of inference throughput to allow complex models to make fast predictions. For even more performance, multiple AWS Inferentia chips can be used together to drive thousands of TOPS of throughput. 
AWS Inferentia will be available for use with Amazon SageMaker, Amazon EC2, and Amazon Elastic Inference.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Microsoft\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a86fa93ff278.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Apple\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_cf1b0657dab9.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Alibaba\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2bd64dc8bf29.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmedium.com\u002Fsyncedreview\u002Falibabas-new-ai-chip-can-process-nearly-80k-images-per-second-63412dec22a3\">Alibaba’s New AI Chip Can Process Nearly 80K Images Per Second\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>At the Alibaba Cloud (Aliyun) Apsara Conference 2019, Pingtouge unveiled its first AI dedicated processor for cloud-based large-scale AI inferencing. 
The Hanguang 800 is the first semiconductor product in Alibaba’s 20-year history.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tencent_Cloud\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_c4dc6b4e2279.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.datacenterdynamics.com\u002Fen\u002Fnews\u002Ftencent-reveals-three-data-center-chips-for-ai-video-transcoding-and-networking\u002F\">Tencent reveals three data center chips - for AI, video transcoding, and networking\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The company claims that the Zixiao AI chip is twice as good as comparable competing products, video transcoding chip Canghai was 30 percent better, and SmartNIC Xuanling was apparently four times as good. 
It did not provide external benchmarks or specific product details.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cbr \u002F>\n\u003Ca name=\"Baidu\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d68736ac9bad.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Fbaidu-says-2nd-gen-kunlun-ai-chips-enter-mass-production-2021-08-18\u002F\">Baidu says 2nd-gen Kunlun AI chips enter mass production\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Chinese tech giant Baidu said on Wednesday it had begun mass-producing second-generation Kunlun artificial intelligence (AI) chips, as it races to become a key player in the chip industry which Beijing is trying to strengthen.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Fujitsu\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_b558b2bdbeec.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>This \u003Ca href=\"https:\u002F\u002Fwww.nextplatform.com\u002F2017\u002F08\u002F09\u002Ffujitsu-bets-deep-leaning-hpc-divergence\u002F\">DLU that Fujitsu is creating\u003C\u002Fa> is done from scratch, and it is not based on either the Sparc or ARM instruction set and, in fact, it has its own instruction set and a new data format specifically for deep learning, which were created from scratch. \n  Japanese computing giant Fujitsu. 
The company, which knows a thing or two about making very efficient and highly scalable systems for HPC workloads, as evidenced by the K supercomputer, does not believe that HPC and AI architectures will converge. Rather, it is banking on the fact that these architectures will diverge and will require very specialized functions. \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Nokia\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_368beff98e86.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>Nokia has developed the \u003Ca href=\"https:\u002F\u002Fnetworks.nokia.com\u002F5g\u002Freefshark\">ReefShark chipsets\u003C\u002Fa> for its 5G network solutions. AI is implemented in the ReefShark design for radio and embedded in the baseband to use augmented deep learning to trigger smart, rapid actions by the autonomous, cognitive network, enhancing network optimization and increasing business opportunities.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Facebook\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9401f7667bc1.png\" height=\"50\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Ffacebook-developing-machine-learning-chip-information-2021-09-09\u002F\">Facebook developing machine learning chip - The Information\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Facebook Inc (FB.O) is developing a machine learning chip to handle tasks such as content recommendation 
to users, The Information reported on Thursday, citing two people familiar with the project.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tesla\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2881b6c5c985.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Fjamesmorris\u002F2022\u002F10\u002F06\u002Fteslas-biggest-news-at-ai-day-was-the-dojo-supercomputer-not-the-optimus-robot\u002F\">Tesla’s Biggest News At AI Day Was The Dojo Supercomputer, Not The Optimus Robot\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Elon Musk played AI Day to the crowd with the focus on the Optimus humanoid robot. But while this could have a huge impact on our lives and society if it does enter mass production at the price Musk suggested ($20,000), another part of the presentation will have more immediate effects. That was the status report on the Dojo supercomputer. It could really change the world much more quickly than a bipedal bot.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsemianalysis.com\u002Ftesla-dojo-ai-super-computer-unique-packaging-and-chip-design-allow-an-order-magnitude-advantage-over-competing-ai-hardware\u002F\">Tesla Dojo – Unique Packaging and Chip Design Allow An Order Magnitude Advantage Over Competing AI Hardware\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Tesla hosted their AI Day and revealed the inner workings of their software and hardware infrastructure. Part of this reveal was the previously teased Dojo AI training chip. Tesla claims their D1 Dojo chip has GPU-level compute, CPU-level flexibility, and networking switch IO. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"IP_Vendors\">\u003C\u002Fa>III. Traditional IP Vendors\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"ARM\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1d3f209f5406.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.arm.com\u002Fproducts\u002Fsilicon-ip-cpu\u002Fethos\u002Fethos-n78\">NPU ETHOS-N78\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Specifically designed for inference at the edge, the ML processor gives an industry-leading performance of 4.6 TOPs, with a stunning efficiency of 3 TOPs\u002FW for mobile devices and smart IP cameras.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F12791\u002Farm-details-project-trillium-mlp-architecture\">ARM Details \"Project Trillium\" Machine Learning Processor Architecture\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Arm’s second-generation, highly scalable and efficient NPU, the Ethos-N78, enables new immersive applications with a 2.5x increase in single-core performance, now scalable from 1 to 10 TOP\u002Fs and beyond through many-core technologies. 
It provides flexibility to optimize the ML capability with 90+ configurations.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Synopsys\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_43dafbf7ed4d.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnews.synopsys.com\u002F2022-04-19-Synopsys-Introduces-Industrys-Highest-Performance-Neural-Processor-IP\">Synopsys Introduces Industry's Highest Performance Neural Processor IP\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>New DesignWare ARC NPX6 NPU IP Delivers Up to 3,500 TOPS Performance for Automotive, Consumer and Data Center Chip Designs\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Imagination\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_778765511bd5.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.imaginationtech.com\u002Fproducts\u002Fai\u002F\">AI Processors\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Whether you want smartness residing in the palm of your hand, consumer products or industrial robots, or enabled by powerful servers in the cloud, we can help you achieve your vision. We enable the smartness in your products with our PowerVR Neural Network Accelerators (NNA) and GPUs. Our NC-SDK enables seamless deployment of AI acceleration on our hardware IP, either in isolation or combined. 
Our NNA provides maximum efficiency with a scalable architecture that enables a wide range of smart edge and endpoint devices, from low-performance IoT to high-performance robotaxis.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"CEVA\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_c66247b85550.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ceva-dsp.com\u002Fapp\u002Fdeep-learning\u002F\">Deep learning for the real-time embedded world\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>One solution lies in supplying a dedicated low power AI processor for Deep Learning at the edge, combined with a deep neural network (DNN) graph compiler.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"Cadence\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4c47686a880a.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.cadence.com\u002Fen_US\u002Fhome\u002Ftools\u002Fip\u002Ftensilica-ip\u002Ftensilica-ai-platform.html\">Tensilica AI Platform\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"VeriSilicon\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_aae253b62444.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.verisilicon.com\u002Fen\u002FIPPortfolio\u002FVivanteNPUIP\">Vivante® NPU IP\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>VeriSilicon's Neural Network Processor (NPU) IP is a highly scalable, programmable computer vision and artificial intelligence processor that supports AI operations upgrades for endpoints, edge devices, and cloud devices. Designed to meet a variety of chip sizes and power budgets, the Vivante NPU IP is a cost-effective, high-quality neural network acceleration engine solution.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"Startups\">\u003C\u002Fa>IV. Startups\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Cerebras\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.cerebras.net\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_3e483f76b984.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.cerebras.net\u002Fpress-release\u002Fcerebras-unveils-andromeda-a-13.5-million-core-ai-supercomputer-that-delivers-near-perfect-linear-scaling-for-large-language-models\">Cerebras Unveils Andromeda, a 13.5 Million Core AI Supercomputer that Delivers Near-Perfect Linear Scaling for Large Language Models\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Delivering more than 1 Exaflop of AI compute and 120 Petaflops of dense compute, Andromeda is one of the largest AI supercomputers ever built, and is dead simple to use\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.cerebras.net\u002Fblog\u002Fcerebras-sets-record-for-largest-ai-models-ever-trained-on-single-device\">Cerebras Sets Record for Largest AI Models Ever Trained on Single Device\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>We are announcing the largest models ever trained on a single device. Using the Cerebras Software Platform (CSoft), our customers can easily train state-of-the-art GPT language models (such as GPT-3[i] and GPT-J[ii]) with up to 20 billion parameters on a single CS-2 system. Running on a single CS-2, these models take minutes to set up and users can quickly move between models with just a few keystrokes. With clusters of GPUs, this takes months of engineering work.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F17061\u002Fcerebras-completes-series-f-funding-another-250m-for-4b-valuation\">Cerebras Completes Series F Funding, Another $250M for $4B Valuation\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The new Series F funding round nets the company another $250m in capital, bringing the total raised through venture capital up to $720 million.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16626\u002Fcerebras-unveils-wafer-scale-engine-two-wse2-26-trillion-transistors-100-yield\">Cerebras Unveils Wafer Scale Engine Two (WSE2): 2.6 Trillion Transistors, 100% Yield\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Two years ago Cerebras unveiled a revolution in silicon design: a processor as big as your head, using as much area on a 12-inch wafer as a rectangular design would allow, built on 16nm, focused on both AI as well as HPC workloads. 
Today the company is launching its second generation product, built on TSMC 7nm, with more than double the cores and more than double of everything.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2019\u002F11\u002F19\u002Fthe-cerebras-cs-1-computes-deep-learning-ai-problems-by-being-bigger-bigger-and-bigger-than-any-other-chip\u002F\">The Cerebras CS-1 computes deep learning AI problems by being bigger, bigger, and bigger than any other chip\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Today, the company announced the launch of its end-user compute product, the Cerebras CS-1, and also announced its first customer, Argonne National Laboratory.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Graphcore\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.graphcore.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_63a3e48f24a3.png\" height=\"70\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fgraphcore-supercharges-ipu-with-wafer-on-wafer\u002F\">Graphcore Supercharges IPU with Wafer-on-Wafer\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Graphcore unveiled its third-generation intelligence processing unit (IPU), the first processor to be built using 3D wafer-on-wafer (WoW) technology.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.graphcore.ai\u002Fmk2-benchmarks\">MK2 PERFORMANCE BENCHMARKS\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2020\u002F02\u002F24\u002Fgraphcore-the-ai-chipmaker-raises-another-150m-at-a-1-95b-valuation\u002F\">Graphcore, the AI chipmaker, raises another $150M at a $1.95B 
valuation\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Graphcore, the Bristol-based startup that designs processors specifically for artificial intelligence applications, announced it has raised another $150 million in funding for R&D and to continue bringing on new customers. Its valuation is now $1.95 billion.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FCH9h8dUtoNK_2ZfkK5YU0g\">Demystifying Another xPU: Graphcore's IPU (in Chinese)\u003C\u002Fa> gives some analysis of its IPU architecture.\u003C\u002Fp>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FAMuqeaShqEv3DnibH3scEA\">Graphcore AI Chip: More Analysis (in Chinese)\u003C\u002Fa>\u003C\u002Fp>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FqP0zsSA7SQWXDqWGEAXmOg\">In-Depth Analysis of AI Chip Startup Graphcore's IPU (in Chinese)\u003C\u002Fa>, written after more information was disclosed.\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tenstorrent\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Ftenstorrent.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4eb29e830cdc.png\" height=\"100\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Ftenstorrent-raises-over-200-million-at-1-billion-valuation-to-create-programmable-high-performance-ai-computers-301295913.html\">Tenstorrent Raises over $200 million at $1 billion Valuation to Create Programmable, High Performance AI Computers\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>TORONTO, May 20, 2021 \u002FPRNewswire\u002F - Tenstorrent, a hardware start-up developing next generation computers, announced today that it has raised 
over $200 million in a recent funding round that values the company at $1 billion. The round was led by Fidelity Management and Research Company and includes additional investments from Eclipse Ventures, Epic CG and Moore Capital. \u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16709\u002Fan-interview-with-tenstorrent-ceo-ljubisa-bajic-and-cto-jim-keller\">An Interview with Tenstorrent: CEO Ljubisa Bajic and CTO Jim Keller\u003C\u002Fa>\u003C\u002Fp>\n \n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Blaize\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.blaize.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_234bc48ab85e.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fautomotive-ai-startup-blaize-closes-71-million-funding-round\u002F\">Automotive AI Startup Blaize Closes $71 Million Funding Round\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Blaize, formerly ThinCI, has closed a Series D round of funding at $71 million. New investor Franklin Templeton and existing investor Temasek led the round, along with participation from Denso and other new and existing investors. 
This round brings Blaize’s total funding to around $155 million.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Koniku\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fkoniku.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2a7c88b55b59.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>Founded in 2014, Newark, California startup \u003Ca href=\"http:\u002F\u002Fkoniku.io\u002F\">Koniku\u003C\u002Fa> has taken in $1.65 million in funding so far to become “the world’s first neurocomputation company”. The idea is that since the brain is the most powerful computer ever devised, why not reverse engineer it? Simple, right? Koniku is actually integrating biological neurons onto chips and has made enough progress that they claim to have AstraZeneca as a customer. Boeing has also signed on with a letter of intent to use the technology in chemical-detecting drones.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Adapteva\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.adapteva.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_8c95340c84dc.png\" height=\"70\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"http:\u002F\u002Fwww.adapteva.com\u002F\">Adapteva\u003C\u002Fa> has taken in $5.1 million in funding from investors that include mobile giant Ericsson. 
\u003Ca href=\"http:\u002F\u002Fwww.parallella.org\u002Fdocs\u002Fe5_1024core_soc.pdf\">The paper \"Epiphany-V: A 1024 processor 64-bit RISC System-On-Chip\"\u003C\u002Fa> describes the design of Adapteva's 1024-core processor chip in 16nm FinFet technology. \u003C\u002Fp>\n\n\u003Cp>\u003Ca name=\"Mythic\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fmythic.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ffe103ad5856.png\" height=\"20\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fera-analog-compute-has-arrived-michael-b-henry\u002F\">The Era of Analog Compute has Arrived!\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>ResNet-50 in our prototype analog AI processor. Production release will support 900-1000 fps and INT8 accuracy at 3W.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2021\u002F06\u002F07\u002Fmythic-launches-analog-ai-processor-that-consumes-10-times-less-power\u002F\">Mythic launches analog AI processor that consumes 10 times less power\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Analog AI processor company Mythic launched its M1076 Analog Matrix Processor today to provide low-power AI processing.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Brainchip\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.brainchipinc.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_80042cb380b0.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca 
href=\"hhttps:\u002F\u002Fventurebeat.com\u002F2022\u002F01\u002F18\u002Fbrainchip-launches-neuromorphic-process-for-ai-at-the-edge\u002F\">BrainChip launches neuromorphic process for AI at the edge\u003C\u002Fa> \u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>BrainChip today announced the commercialization of its Akida neural networking processor. Aimed at a variety of edge and internet of things (IoT) applications, BrainChip claims to be the first commercial producer of neuromorphic AI chips, which could deliver benefits in ultra-low power and performance over conventional approaches.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Deepvision\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fdeepvision.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1ab05a056ec2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"AI Processor Chipmaker Deep Vision Raises $35 Million in Series B Funding\">AI Processor Chipmaker Deep Vision Raises $35 Million in Series B Funding\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Tiger Global Leads Series B Financing, Enabling Deep Vision to Expand Video Analytics and Natural Language Processing Capabilities in Edge Computing Applications\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Groq\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca href=\"http:\u002F\u002Fgroq.com\u002F\">Groq\u003C\u002Fa>\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fgroq-demos-fast-llms-on-4-year-old-silicon\u002F\">Groq 
Demonstrates Fast LLMs on 4-Year-Old Silicon\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>MOUNTAIN VIEW, CALIF. — Groq has repositioned its first-generation AI inference chip as a language processing unit (LPU), and demonstrated Meta’s Llama-2 70-billion–parameter large language model (LLM) running inference at 240 tokens per second per user. Groq CEO Jonathan Ross told EE Times that the company had Llama-2 up and running on the company’s 10-rack (64-chip) cloud-based dev system in “a couple of days.” This system is based on the company’s first gen AI silicon, released four years ago.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Famyfeldman\u002F2021\u002F04\u002F14\u002Fai-chip-startup-groq-founded-by-ex-googlers-raises-300-million-to-power-autonomous-vehicles-and-data-centers\u002F\">AI Chip Startup Groq, Founded By Ex-Googlers, Raises $300 Million To Power Autonomous Vehicles And Data Centers\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Jonathan Ross left Google to launch next-generation semiconductor startup Groq in 2016. Today, the Mountain View, California-based firm said that it had raised $300 million led by Tiger Global Management and billionaire investor Dan Sundheim’s D1 Capital as it officially launched into public view. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Kneron\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.kneron.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6fa54b62835b.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Fkneron-to-accelerate-edge-ai-development-with-more-than-10-million-usd-series-a-financing-300556674.html\">Kneron to Accelerate Edge AI Development with more than 10 Million USD Series A Financing\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"GTI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.gyrfalcontech.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d08a29c36ae6.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>According to this article, \u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Fgyrfalcon-offers-automotive-ai-chip-technology-300860069.html\">\"Gyrfalcon offers Automotive AI Chip Technology\"\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Gyrfalcon Technology Inc. (GTI), has been promoting matrix-based application specific chips for all forms of AI since offering their production versions of AI accelerator chips in September 2017. 
Through the licensing of its proprietary technology, the company is confident it can help automakers bring highly competitive AI chips to production for use in vehicles within 18 months, along with significant gains in AI performance, improvements in power dissipation and cost advantages.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"SambaNova\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fsambanovasystems.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6f7e319e9b2c.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002Fai\u002Fsambanova-unveils-new-ai-chip-to-power-full-stack-ai-platform\u002F\">SambaNova unveils new AI chip to power full-stack AI platform\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Today Palo-Alto-based SambaNova Systems unveiled a new AI chip, the SN40L, which will power its full-stack large language model (LLM) platform, the SambaNova Suite, that helps enterprises go from chip to model — building and deploying customized generative AI models.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F04\u002F13\u002Fsambanova-raises-676m-at-a-5-1b-valuation-to-double-down-on-cloud-based-ai-software-for-enterprises\u002F\">SambaNova raises $676M at a $5.1B valuation to double down on cloud-based AI software for enterprises\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SambaNova — a startup building AI hardware and integrated systems that run on it that only officially came out of three years in stealth last December — is announcing a huge round of funding today to take its business out into the world. 
The company has closed on $676 million in financing, a Series D that co-founder and CEO Rodrigo Liang has confirmed values the company at $5.1 billion.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsambanova.ai\u002Farticles\u002Fintroducing-sambanova-systems-datascale-a-new-era-of-computing\u002F\">Introducing SambaNova Systems DataScale: A New Era of Computing\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsambanova.ai\u002Fa-new-state-of-the-art-in-nlp-beyond-gpus\u002F\">A New State of the Art in NLP: Beyond GPUs\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SambaNova has been working closely with many organizations the past few months and has established a new state of the art in NLP. This advancement in NLP deep learning is illustrated by a GPU-crushing, world record performance result achieved on SambaNova Systems’ Dataflow-optimized system. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"GreenWaves\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fgreenwaves-technologies.com\u002Fen\u002Fgreenwaves-technologies-2\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_676acbe60bfe.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.eu\u002Fgreenwaves-shows-off-advanced-audio-demos\u002F\">GreenWaves Shows Off Advanced Audio Demos\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The Gap9 processor, a successor to Gap8 which targets computer vision in IoT devices, is an ultra-low power neural network processor suitable for battery-powered devices. GreenWaves’ vice president of marketing Martin Croome told EE Times Europe that the company decided to focus Gap9 on the hearables market after receiving traction from this sector for Gap8.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Lightelligence\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.lightelligence.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_65b5f4974105.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Foptical-computing-chip-runs-hardest-math-problems-100x-faster-than-gpus\u002F\">Optical Chip Solves Hardest Math Problems Faster than GPUs\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Optical computing startup Lightelligence has demonstrated a silicon photonics 
accelerator running the Ising problem more than 100 times faster than a typical GPU setup.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Lightmatter\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.lightmatter.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_232477d0010b.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Flightmatter-raises-more-funding-for-photonic-ai-chip\u002F\">Lightmatter Raises More Funding for Photonic AI Chip\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Lightmatter, the MIT spinout building AI accelerators with a silicon photonics computing engine, announced a Series B funding round, raising an additional $80 million. The company’s technology is based on proprietary silicon photonics technology which manipulates coherent light inside a chip to perform calculations very quickly while using very little power.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Hailo\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.hailotech.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6364057efc82.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Funicorn-ai-chipmaker-hailo-raises-136-million\u002F\">‘Unicorn’ AI Chipmaker Hailo Raises $136 Million\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Israeli AI chip startup Hailo has raised $136 million in a Series C funding round, bringing the company’s total to $224 million. 
The company has also reportedly reached “unicorn” status.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tachyum\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.tachyum.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_f9512d345867.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002Foff-the-wire\u002Ftachyum-launches-prodigy-universal-processor\u002F\">Tachyum Launches Prodigy Universal Processor\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>May 11, 2021 — Tachyum today launched the world’s first universal processor, Prodigy, which unifies the functionality of a CPU, GPU and TPU in a single processor, creating a homogeneous architecture, while delivering massive performance improvements at a cost many times less than competing products\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Alphaics\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.alphaics.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9ba8d6508cc9.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Falphaics-begins-sampling-its-deep-learning-co-processor\u002F\">AlphaICs Begins Sampling Its Deep Learning Co-Processor\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AlphaICs, a startup developing edge AI and learning silicon aimed at smart vision applications, is sampling its deep learning co-processor, Gluon, that also comes with a software 
development kit.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Syntiant\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.syntiant.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_fb1309e0fd55.png\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsemiengineering.com\u002Fsyntiant-analog-deep-learning-chips\u002F\">Syntiant: Analog Deep Learning Chips\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Startup Syntiant Corp. is an Irvine, Calif. semiconductor company led by former top Broadcom engineers with experience in both innovative design and in producing chips designed to be produced in the billions, according to company CEO Kurt Busch.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"aiCTX\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Faictx.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7340367511ce.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1333983\">Baidu Backs Neuromorphic IC Developer\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>MUNICH — Swiss startup aiCTX has closed a $1.5 million pre-A funding round from Baidu Ventures to develop commercial applications for its low-power neuromorphic computing and processor designs and enable what it calls “neuromorphic intelligence.” It is targeting low-power edge-computing embedded sensory processing 
systems.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Flexlogix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.flex-logix.com\u002Fnmax\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5a8f2c488a4c.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fflex-logix-has-two-paths-to-making-a-lot-of-money-challenging-nvidia-in-ai\u002F\">Flex Logix has two paths to making a lot of money challenging Nvidia in AI\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>The programmable chip company scores $55 million in venture backing, bringing its total haul to $82 million\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"PFN\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fprojects.preferred.jp\u002Fmn-core\u002Fen\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1018bf414725.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.preferred-networks.jp\u002Fen\u002Fnews\">Preferred Networks develops a custom deep learning processor MN-Core for use in MN-3, a new large-scale cluster, in spring 2020\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Dec. 12, 2018, Tokyo Japan – Preferred Networks, Inc. (“PFN”, Head Office: Tokyo, President & CEO: Toru Nishikawa) announces that it is developing MN-Core (TM), a processor dedicated to deep learning and will exhibit this independently developed hardware for deep learning, including the MN-Core chip, board, and server, at the SEMICON Japan 2018, held at Tokyo Big Sight. 
\n \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Cornami\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fcornami.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4c30993a20ad.jpg\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fai-startup-cornami-reveals-details-of-neural-net-chip\u002F\">AI Startup Cornami reveals details of neural net chip\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Stealth startup Cornami on Thursday revealed some details of its novel approach to chip design to run neural networks. CTO Paul Masters says the chip will finally realize the best aspects of a technology first seen in the 1970s. \n \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Anaflash\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fanaflash.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_739f9376db77.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.smart2zero.com\u002Fnews\u002Fai-chip-startup-offers-new-edge-computing-solution\">AI chip startup offers new edge computing solution\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Anaflash Inc. (San Jose, CA) is a startup company that has developed a test chip to demonstrate analog neurocomputing taking place inside logic-compatible embedded flash memory. 
\n \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Optalysys\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.optalysys.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a049a3d12869.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.globenewswire.com\u002Fnews-release\u002F2019\u002F03\u002F07\u002F1749510\u002F0\u002Fen\u002FOptalysys-launches-world-s-first-commercial-optical-processing-system-the-FT-X-2000.html\">Optalysys launches world’s first commercial optical processing system, the FT:X 2000\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Optalysys develops Optical Co-processing technology which enables new levels of processing capability delivered with a vastly reduced energy consumption compared with conventional computers. Its first coprocessor is based on an established diffractive optical approach that uses the photons of low-power laser light instead of conventional electricity and its electrons. This inherently parallel technology is highly scalable and is the new paradigm of computing. 
\n \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"etacompute\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fetacompute.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_b4186d563ba2.png\" height=\"80\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fspectrum.ieee.org\u002Ftech-talk\u002Fsemiconductors\u002Fprocessors\u002Flowpower-ai-startup-eta-compute-delivers-first-commercial-chips\">Low-Power AI Startup Eta Compute Delivers First Commercial Chips\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The firm pivoted away from riskier spiking neural networks using a new power management scheme\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fspectrum.ieee.org\u002Ftech-talk\u002Fsemiconductors\u002Fprocessors\u002Feta-compute-debuts-spiking-neural-network-chip-for-edge-ai\">Eta Compute Debuts Spiking Neural Network Chip for Edge AI\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Chip can learn on its own and inference at 100-microwatt scale, says company at Arm TechCon.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Achronix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.achronix.com\u002Fproduct\u002Fspeedster7t\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_eed09a32dac1.png\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1334717\">Achronix Rolls 7-nm FPGAs for AI\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Achronix is back in the game of providing full-fledged FPGAs with a new 
high-end 7-nm family, joining the Gold Rush of silicon to accelerate deep learning. It aims to leverage novel design of its AI block, a new on-chip network, and use of GDDR6 memory to provide similar performance at a lower cost than larger rivals Intel and Xilinx.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Areanna\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fareanna-ai.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_3c88eb7bd4c8.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1334947#\">Startup Runs AI in Novel SRAM\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Areanna is the latest example of an explosion of new architectures spawned by the rise of deep learning. The debut of a whole new approach to computing has fired imaginations of engineers around the industry hoping to be the next Hewlett and Packard.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Neuroblade\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.neuroblade.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7f150f5c3a57.png\" height=\"120\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetasia.com\u002Fnews\u002Farticle\u002FNeuroBlade-Preps-Inference-Chip\">NeuroBlade Preps Inference Chip\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Add NeuroBlade to the dozens of startups working on AI silicon. 
The Israeli company just closed a $23 million Series A, led by the founder of Check Point Software and with participation from Intel Capital.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Luminous\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.luminouscomputing.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7befc65c9954.png\" height=\"90\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.technologyreview.com\u002Fs\u002F613668\u002Fai-chips-uses-optical-semiconductor-machine-learning\u002F\">Bill Gates just backed a chip startup that uses light to turbocharge AI\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Luminous Computing has developed an optical microchip that runs AI models much faster than other semiconductors while using less power.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Efinix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.efinixinc.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_31ff09e90246.png\" height=\"25\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fchip-startup-efinix-hopes-to-bootstrap-ai-efforts-in-iot\u002F\">Chip startup Efinix hopes to bootstrap AI efforts in IoT\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Six-year-old startup Efinix has created an intriguing twist on the FPGA technology dominated by Intel and Xilinx; the company hopes its energy-efficient chips will bootstrap the market for embedded AI in the Internet of 
Things.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"AIstorm\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Faistorm.ai\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ec0742a88205.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2019\u002F02\u002F11\u002Faistorm-raises-13-2-million-for-ai-edge-computing-chips\u002F\">AIStorm raises $13.2 million for AI edge computing chips\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>David Schie, a former senior executive at Maxim, Micrel, and Semtech, thinks both markets are ripe for disruption. He — along with WSI, Toshiba, and Arm veterans Robert Barker, Andreas Sibrai, and Cesar Matias — in 2011 cofounded AIStorm, a San Jose-based artificial intelligence (AI) startup that develops chipsets that can directly process data from wearables, handsets, automotive devices, smart speakers, and other internet of things (IoT) devices. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cp>\u003Ca name=\"SiMa\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fsima.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a4291b4e6218.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.businesswire.com\u002Fnews\u002Fhome\u002F20200512005313\u002Fen\u002FSiMa.ai-Raises-30-Million-Series-Investment-Led\">SiMa.ai Raises $30 Million in Series A Investment Round Led by Dell Technologies Capital\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SAN JOSE, Calif.--(BUSINESS WIRE)--SiMa.ai, the company enabling high performance machine learning to go green, today announced its Machine Learning SoC (MLSoC) platform – the industry’s first unified solution to support traditional compute with high performance, lowest power, safe and secure machine learning inference. Delivering the highest frames per second per watt, SiMa.ai’s MLSoC is the first machine learning platform to break the 1000 FPS\u002FW barrier for ResNet-50. In customer engagements, the company has demonstrated 10-30x improvement in FPS\u002FW through its automated software flow across a wide range of embedded edge applications, over today’s competing solutions. 
The platform will provide machine learning solutions that range from 50 TOPs@5W to 200 TOPs@20W, delivering an industry first of 10 TOPs\u002FW for high performance inference.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.businesswire.com\u002Fnews\u002Fhome\u002F20191022005079\u002Fen\u002FSiMa.ai%E2%84%A2-Introduces-MLSoC%E2%84%A2\">SiMa.ai™ Introduces MLSoC™ – First Machine Learning Platform to Break 1000 FPS\u002FW Barrier with 10-30x Improvement over Alternative Solutions\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cp>\u003Ca name=\"Untether\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Funtether.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_998f80da7af2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2021\u002F07\u002F20\u002Funtether-ai-nabs-125m-for-ai-acceleration-chips\u002F\">Untether AI nabs $125M for AI acceleration chips\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Untether AI, a startup developing custom-built chips for AI inferencing workloads, today announced it has raised $125 million from Tracker Capital Management and Intel Capital. 
The round, which was oversubscribed and included participation from Canada Pension Plan Investment Board and Radical Ventures, will be used to support customer expansion.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cp>\u003Ca name=\"GrAI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.graimatterlabs.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_deab494a1d0f.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2019\u002F09\u002F18\u002Fgrai-matter-labs-reveals-neuronflow-technology-and-announces-graiflow-sdk\u002F\">GrAI Matter Labs Reveals NeuronFlow Technology and Announces GrAIFlow SDK\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>GrAI Matter Labs (aka GML), a neuromorphic computing pioneer today revealed NeuronFlow – a new programmable processor technology – and announced an early access program to its GrAIFlow software development kit.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Rain\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Frain-neuromorphics.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_aaf566b782d2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.crunchbase.com\u002Forganization\u002Frain-neuromorphics\">Rain Neuromorphics on Crunchbase\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>We build artificial intelligence processors, inspired by the brain. 
Our mission is to enable brain-scale intelligence.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"ABR\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fappliedbrainresearch.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ea38b40deabe.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.crunchbase.com\u002Forganization\u002Fapplied-brain-research\">Applied Brain Research on Crunchbase\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>ABR makes the world's most advanced neuromorphic compiler, runtime and libraries for the emerging space of neuromorphic computing.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Xmos\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.xmos.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_515efbca5ebb.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fxmos-adapts-xcore-into-aiot-crossover-processor\u002F\">XMOS adapts Xcore into AIoT ‘crossover processor’\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>EE Times exclusive! 
The new chip targets AI-powered voice interfaces in IoT devices — “the most important AI workload at the endpoint.”\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2020\u002F02\u002F12\u002Fxmos-unveils-xcore-ai-a-powerful-chip-designed-for-ai-processing-at-the-edge\u002F\">XMOS unveils Xcore.ai, a powerful chip designed for AI processing at the edge\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The latest xcore.ai is a crossover chip designed to deliver high-performance AI, digital signal processing, control, and input\u002Foutput in a single device with prices from $1.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"DinoplusAI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fdinoplus.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_70e58ebe8f38.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>We design and produce AI processors and the software to run them in data centers. Our unique approach optimizes for inference with the focus on performance, power efficiency, and ease of use; and at the same time our approach enables cost-effective training. \u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Furiosa\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.furiosa.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ea697849018f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>We build high-performance AI inference coprocessors that can be seamlessly integrated into various computing platforms including data centers, servers, desktops, automobiles and robots. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Corerain\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.corerain.com\u002Fen\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_365851b13dd9.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>Corerain provides ultra-high performance AI acceleration chips and the world's first streaming engine-based AI development platform.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Perceive\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fperceive.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_609b60445d34.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2020\u002F03\u002F31\u002Fperceive-emerges-from-stealth-with-ergo-edge-ai-chip\u002F\">Perceive emerges from stealth with Ergo edge AI chip\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>On-device computing solutions startup Perceive emerged from stealth today with its first product: the Ergo edge processor for AI inference. 
CEO Steve Teig claims the chip, which is designed for consumer devices like security cameras, connected appliances, and mobile phones, delivers “breakthrough” accuracy and performance in its class.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"SimpleMachines\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.simplemachines.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_f701d005198f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.design-reuse.com\u002Fnews\u002F49012\u002Fsimplemachines-ai-chip-tsmc-16nm.html\">SimpleMachines, Inc. Debuts First-of-its-Kind High Performance Chip\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>As traditional chip makers struggle to embrace the challenges presented by the rapidly evolving AI software landscape, a San Jose startup has announced it has working silicon and a whole new future-proof chip paradigm to address these issues.\n\nThe SimpleMachines, Inc. (SMI) team – which includes leading research scientists and industry heavyweights formerly of Qualcomm, Intel and Sun Microsystems – has created a first-of-its-kind easily programmable, high-performance chip that will accelerate a wide variety of AI and machine-learning applications. 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Neureality\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.neureality.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_87779cc94152.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2022\u002F12\u002F06\u002Fneureality-ai-accelerator-chips-startup-raises-35m\u002F\">NeuReality lands $35M to bring AI accelerator chips to market\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>NeuReality, a startup developing AI inferencing accelerator chips, has raised $35 million in new venture capital.\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.electronicsmedia.info\u002F2021\u002F05\u002F06\u002Fneureality-unveiled-nr1-p-a-novel-ai-centric-inference-platform\u002F\">NeuReality unveiled NR1-P, A novel AI-centric inference platform\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>NeuReality has unveiled NR1-P, a novel AI-centric inference platform. NeuReality has already started demonstrating its AI-centric platform to customers and partners. NeuReality has redefined today’s outdated AI system architecture by developing an AI-centric inference platform based on a new type of System-on-Chip (SoC). 
\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F02\u002F10\u002Fneureality-raises-8m-for-its-novel-ai-inferencing-platform\u002F\">NeuReality raises $8M for its novel AI inferencing platform\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>NeuReality, an Israeli AI hardware startup that is working on a novel approach to improving AI inferencing platforms by doing away with the current CPU-centric model, is coming out of stealth today and announcing an $8 million seed round. \u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca name=\"AnalogInference\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.analog-inference.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ca545698901d.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eenewsanalog.com\u002Fnews\u002Fanalog-inference-startup-raises-106-million\">Analog inference startup raises $10.6 million\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The company is backed by Khosla Ventures and is developing its first generation of products for AI computing at the edge. 
The company raised $4.5 million shortly after its formation in March 2018, so the latest tranche brings the total raised to-date to $15.1 million\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Quadric\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.quadric.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d51fda59e18f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002Foff-the-wire\u002Fquadric-announces-unified-silicon-and-software-platform-optimized-for-on-device-ai\u002F\">Quadric Announces Unified Silicon and Software Platform Optimized for On-Device AI\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>BURLINGAME, Calif., June 22, 2021 — Quadric (quadric.io), an innovator in high-performance edge processing, has introduced a unified silicon and software platform that unlocks the power of on-device AI. 
\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca name=\"EdgeQ\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fedgeq.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2f6ae7c349fb.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F01\u002F26\u002Fedgeq-reveals-more-details-behind-its-next-gen-5g-ai-chip\u002F\">EdgeQ reveals more details behind its next-gen 5G\u002FAI chip\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>5G is the current revolution in wireless technology, and every chip company old and new is trying to burrow their way into this ultra-competitive — but extremely lucrative — market. One of the most interesting new players in the space is EdgeQ, a startup with a strong technical pedigree via Qualcomm that we covered last year after it raised a nearly $40 million Series A.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Innatera\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.innatera.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a92a6c2fb5be.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Finnatera-unveils-neuromorphic-ai-chip-to-accelerate-spiking-networks\u002F\">Innatera Unveils Neuromorphic AI Chip to Accelerate Spiking Networks\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Innatera, the Dutch startup making neuromorphic AI accelerators for spiking neural networks, has produced its first chips, gauged their performance, and revealed details of their 
architecture.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Ceremorphic\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fceremorphic.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_19670e43a1c2.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fredpine-founder-launches-ai-processor-startup\u002F\">Redpine Founder Launches AI Processor Startup\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Ceremorphic, an AI chip startup emerging from stealth mode this week, is readying a heterogeneous AI processor aimed at model training in data centers, automotive, high-performance computing, robotics and other emerging applications.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Aspinity\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.aspinity.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5efe07be3c22.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fembeddedcomputing.com\u002Ftechnology\u002Fanalog-and-power\u002Fanalog-semicundoctors-sensors\u002Faspinity-analog-ml-chip-allows-battery-powered-always-on\">Aspinity Analog ML Chip Allows Battery-Powered “Always On”\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Machine learning (ML) is all about massive amounts of processing, DSP, etc., right? Maybe not, according to the team at Aspinity. The company continues to push ahead on the analog front. The latest member of the company’s analogML family, the AML100, operates completely in the analog domain. 
As a result, it can reduce always-on system power by 95% (for the record, we had to walk through this a couple of times before I believed them).\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Teramem\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.tetramem.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_49d99193ce43.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.tetramem.com\u002Fposts\u002FTetraMem-Technology-Debut-at-Linley\">TetraMem enjoyed an exciting public debut of our analog in-memory compute technology at the Linley Spring 2022 Processor Conference.\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n \n\u003Cp>\u003Ca name=\"d-matrix\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.d-matrix.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d54084c9c0b1.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Fai-chip-startup-d-matrix-raises-110-mln-with-backing-microsoft-2023-09-06\u002F\">Exclusive: AI chip startup d-Matrix raises $110 million with backing from Microsoft\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Sept 6 (Reuters) - Silicon Valley-based artificial intelligence chip startup d-Matrix has raised $110 million from investors that include Microsoft Corp (MSFT.O) at a time when many chip companies are struggling to raise cash.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Fkarlfreund\u002F2022\u002F06\u002F21\u002Fd-matrix-ai-chip-promises-efficient-transformer-processing\u002F\">D-Matrix AI chip promises efficient transformer processing\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The startup combines digital in-memory compute and chiplet implementations for data-center-grade inference.\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"AIChipCompilers\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>AI Chip Compilers\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n1. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fglow\">pytorch\u002Fglow\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"https:\u002F\u002Ftvm.ai\u002F\">TVM: End to End Deep Learning Compiler Stack\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"https:\u002F\u002Fwww.tensorflow.org\u002Fxla\">Google TensorFlow XLA\u003C\u002Fa>\u003Cbr>\n4. \u003Ca href=\"https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt\">Nvidia TensorRT\u003C\u002Fa>\u003Cbr>\n5. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fplaidml\u002Fplaidml\">PlaidML\u003C\u002Fa>\u003Cbr>\n6. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fngraph\">nGraph\u003C\u002Fa>\u003Cbr>\n7. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTiramisu-Compiler\u002Ftiramisu\">MIT Tiramisu compiler\u003C\u002Fa>\u003Cbr>\n8. \u003Ca href=\"https:\u002F\u002Fonnc.ai\u002F\">ONNC (Open Neural Network Compiler)\u003C\u002Fa>\u003Cbr>\n9. \u003Ca href=\"https:\u002F\u002Fmlir.llvm.org\u002F\">MLIR: Multi-Level Intermediate Representation\u003C\u002Fa>\u003Cbr>\n10. \u003Ca href=\"http:\u002F\u002Ftensor-compiler.org\u002F\">The Tensor Algebra Compiler (taco)\u003C\u002Fa>\u003Cbr>\n11. 
\u003Ca href=\"https:\u002F\u002Ffacebookresearch.github.io\u002FTensorComprehensions\u002F\">Tensor Comprehensions\u003C\u002Fa>\u003Cbr>\n12. \u003Ca href=\"https:\u002F\u002Fwww.polymagelabs.com\u002F\u002F\">PolyMage Labs\u003C\u002Fa>\u003Cbr>\n13. \u003Ca href=\"https:\u002F\u002Foctoml.ai\u002F\">OctoML\u003C\u002Fa>\u003Cbr>\n14. \u003Ca href=\"https:\u002F\u002Fwww.modular.com\u002F\">Modular AI\u003C\u002Fa>\u003Cbr>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"AIChipBenchmarks\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>AI Chip Benchmarks\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n\n1. \u003Ca href=\"https:\u002F\u002Fdawn.cs.stanford.edu\u002Fbenchmark\u002Findex.html\">DAWNBench: An End-to-End Deep Learning Benchmark and Competition Image Classification (ImageNet)\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Frdadolf\u002Ffathom\">Fathom: Reference workloads for modern deep learning methods\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"https:\u002F\u002Fmlperf.org\u002F\">MLPerf: A broad ML benchmark suite for measuring performance of ML software frameworks, ML hardware accelerators, and ML cloud platforms\u003C\u002Fa>. \n\u003Cstrong>You can find the latest MLPerf results: training 2.1, HPC 2.0, inference tiny 1.0 \u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-training-4q2022\u002F\">here\u003C\u002Fa>\u003C\u002Fstrong>. \u003Cbr>\n\u003Cstrong>You can find MLPerf inference results v2.1 \u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-inference-v21\u002F\">here\u003C\u002Fa>\u003C\u002Fstrong>. \u003Cbr>\n\u003Cstrong>You can find MLPerf training results v1.0 \u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-training-2q2022\u002F\">here\u003C\u002Fa>\u003C\u002Fstrong>. \u003Cbr>\n\n4. 
\u003Ca href=\"https:\u002F\u002Faimatrix.ai\u002Fen-us\u002Findex.html\">AI Matrix\u003C\u002Fa>\u003Cbr>\n5. \u003Ca href=\"http:\u002F\u002Fai-benchmark.com\u002Findex.html\">AI-Benchmark\u003C\u002Fa>\u003Cbr>\n6. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAIIABenchmark\u002FAIIA-DNN-benchmark\">AIIABenchmark\u003C\u002Fa>\u003Cbr>\n7. \u003Ca href=\"https:\u002F\u002Fwww.eembc.org\u002Fmlmark\u002F\">EEMBC MLMark Benchmark\u003C\u002Fa>\u003Cbr>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Reference\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>Reference\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n      \n1. \u003Ca href=\"https:\u002F\u002Fmeanderful.blogspot.jp\u002F2017\u002F06\u002Ffpgas-and-ai-processors-dnn-and-cnn-for.html\">FPGAs and AI processors: DNN and CNN for all\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"http:\u002F\u002Fwww.nanalyze.com\u002F2017\u002F05\u002F12-ai-hardware-startups-new-ai-chips\u002F\">12 AI Hardware Startups Building New AI Chips\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"http:\u002F\u002Feyeriss.mit.edu\u002Ftutorial.html\">Tutorial on Hardware Architectures for Deep Neural Networks\u003C\u002Fa>\u003Cbr>\n4. \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnicsefc.ee.tsinghua.edu.cn\u002Fprojects\u002Fneural-network-accelerator\u002F\">Neural Network Accelerator Comparison\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n5. \"White Paper on AI Chip Technologies 2018\". You can download it from \u003Ca href=\"https:\u002F\u002Fcloud.tsinghua.edu.cn\u002Ff\u002F9aa0a4f0a5684cc48495\u002F?dl=1\">here\u003C\u002Fa>, or \u003Ca href=\"https:\u002F\u002Fdrive.google.com\u002Fopen?id=1ieDm0bpjVWl5MnSESRs92EcmoSzG5vcm\">Google Drive.\u003C\u002Fa>\u003Cbr>\n6. \u003Cstrong>\"What We Talk About When We Talk About AI Chip\". 
\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FSbX5yz5d3GXaLcl15DO6OQ\">#1\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FzvgDgKpIMIRLFUEW0fFOeg\">#2\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FCKHs5yblcMur4h2BwUBICw\">#3\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FhFnHhaWWYTFRUsD3HlMbLw\">#4\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n7. \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fbirenresearch.github.io\u002FAIChip_Paper_List\u002F\">AI Chip Paper List\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n8. \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fkhairy2011.medium.com\u002Ftpu-vs-gpu-vs-cerebras-vs-graphcore-a-fair-comparison-between-ml-hardware-3f5a19d89e38\">TPU vs GPU vs Cerebras vs Graphcore: A Fair Comparison between ML Hardware\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n\n\u003Cdiv align=\"center\">\n\u003Ca href=\"http:\u002F\u002Fwww.reliablecounter.com\" target=\"_blank\">\u003Cimg src=\"http:\u002F\u002Fwww.reliablecounter.com\u002Fcount.php?page=https:\u002F\u002Fbasicmi.github.io\u002FDeep-Learning-Processor-List\u002F&digit=style\u002Fplain\u002F3\u002F&reloads=1\" alt=\"laptop\" title=\"laptop\" border=\"0\">\u003C\u002Fa>\n\u003C\u002Fdiv>\n","\u003Cdiv align=\"center\">\u003Ch1>AI Chips (ICs and IPs)\u003C\u002Fh1>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_83ff08c0f8b5.png\">\u003C\u002Fdiv>\n\u003Cbr>\n\u003Cdiv align=\"center\">Edited by \u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fshan-tang-27342510\u002F\">\u003Cstrong>S.T.\u003C\u002Fstrong>\u003C\u002Fa> (LinkedIn)\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cstrong>Welcome to my WeChat official account \u003Ca 
href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fmp\u002Fappmsgalbum?action=getalbum&__biz=MzI3MDQ2MjA3OA==&scene=1&album_id=1374108991751782402&count=3#wechat_redirect\">StarryHeavensAbove\u003C\u002Fa> for more articles on AI chips\u003C\u002Fstrong>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_fec5ee2511dc.jpg\" height=\"100\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a38aee112c7a.png\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n \n\u003Cdiv align=\"center\">\u003Ch2>Latest Updates\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\n\u003Cfont color=\"Darkred\">\n\u003Cul>\n\u003Cli>Added news on \u003Ca href=\"#SambaNova\">SambaNova\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Qualcomm\">Qualcomm\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added links to the \u003Ca href=\"#AIChipBenchmarks\">latest MLPerf results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#IBM\">IBM 
AIU\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Tesla\">Tesla Dojo\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added links to the \u003Ca href=\"#AIChipBenchmarks\">latest MLPerf results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on the \u003Ca href=\"#Tachyum\">Tachyum Prodigy Universal Processor\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Habana\">Intel Habana Gaudi®2\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Modular\">Modular AI\u003C\u002Fa> (in the AI Chip Compilers section).\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Teramem\">TeraMem\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Aspinity\">Aspinity\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Synopsys\">Synopsys DesignWare ARC NPX6 NPU IP\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Nvidia\">Nvidia Hopper\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Graphcore\">Graphcore\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Ceremorphic\">Ceremorphic\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Lightelligence\">Lightelligence\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added links to the \u003Ca href=\"#AIChipBenchmarks\">latest MLPerf results from MLCommons\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Habana\">Habana\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on the \u003Ca href=\"#Google\">Google Tensor chip\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Intel\">Intel Loihi 2\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Tesla\">Tesla Dojo\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Untether\">Untether AI\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Innatera\">Innatera Nanosystems\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca 
href=\"#EdgeQ\">EdgeQ\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Quadric\">Quadric\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#AnalogInference\">Analog Inference\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Tenstorrent\">Tenstorrent\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Google\">Google\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#SiMa\">SiMa.ai\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added startup \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>.\u003C\u002Fli>\n\u003Cli>Added news on \u003Ca href=\"#SambaNova\">SambaNova\u003C\u002Fa>.\u003C\u002Fli>\n\u003C\u002Ful>\n\u003C\u002Ffont>\n\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>Shortcuts\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\u003Ctable style=\"width:100%\">\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#IC_Vendors\">IC Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#Intel\">Intel\u003C\u002Fa>, \u003Ca href=\"#Qualcomm\">Qualcomm\u003C\u002Fa>, \u003Ca href=\"#Nvidia\">Nvidia\u003C\u002Fa>, \u003Ca href=\"#Samsung\">Samsung\u003C\u002Fa>, \u003Ca href=\"#AMD\">AMD\u003C\u002Fa>, \u003Ca href=\"#IBM\">IBM\u003C\u002Fa>, \u003Ca href=\"#Marvell\">Marvell\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#Tech_Giants\">Tech Giants & HPC Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#Google\">Google\u003C\u002Fa>, \u003Ca href=\"#Amazon_AWS\">Amazon AWS\u003C\u002Fa>, \u003Ca href=\"#Microsoft\">Microsoft\u003C\u002Fa>, \u003Ca href=\"#Apple\">Apple\u003C\u002Fa>, \u003Ca href=\"#Alibaba\">Alibaba Group\u003C\u002Fa>, \u003Ca href=\"#Tencent_Cloud\">Tencent Cloud\u003C\u002Fa>, \u003Ca href=\"#Baidu\">Baidu\u003C\u002Fa>, \u003Ca 
href=\"#Fujitsu\">Fujitsu\u003C\u002Fa>, \u003Ca href=\"#Nokia\">Nokia\u003C\u002Fa>, \u003Ca href=\"#Facebook\">Facebook\u003C\u002Fa>, \u003Ca href=\"#Tesla\">Tesla\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Cth>\u003Ca href=\"#IP_Vendors\">IP Vendors\u003C\u002Fa>\u003C\u002Fth>\u003Ctd>\u003Ca href=\"#ARM\">ARM\u003C\u002Fa>, \u003Ca href=\"#Synopsys\">Synopsys\u003C\u002Fa>, \u003Ca href=\"#Imagination\">Imagination Technologies\u003C\u002Fa>, \u003Ca href=\"#CEVA\">CEVA\u003C\u002Fa>, \u003Ca href=\"#Cadence\">Cadence\u003C\u002Fa>, \u003Ca href=\"#VeriSilicon\">VeriSilicon\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>  \n    \u003Cth>\u003Ca href=\"#Startups_Worldwide\">Startups Worldwide\u003C\u002Fa>\u003C\u002Fth>\n    \u003Ctd>\u003Ca href=\"#Cerebras\">Cerebras\u003C\u002Fa>, \u003Ca href=\"#Graphcore\">Graphcore\u003C\u002Fa>, \u003Ca href=\"#Tenstorrent\">Tenstorrent\u003C\u002Fa>, \u003Ca href=\"#Blaize\">Blaize\u003C\u002Fa>, \u003Ca href=\"#Koniku\">Koniku\u003C\u002Fa>, \u003Ca href=\"#Adapteva\">Adapteva\u003C\u002Fa>, \u003Ca href=\"#Mythic\">Mythic\u003C\u002Fa>, \u003Ca href=\"#Brainchip\">BrainChip\u003C\u002Fa>, \u003Ca href=\"#Leepmind\">LeapMind\u003C\u002Fa>, \u003Ca href=\"#Groq\">Groq\u003C\u002Fa>, \u003Ca href=\"#Kneron\">Kneron\u003C\u002Fa>, \u003Ca href=\"#Esperanto\">Esperanto Technologies\u003C\u002Fa>, \u003Ca href=\"#GTI\">Gyrfalcon Technology\u003C\u002Fa>, \u003Ca href=\"#SambaNova\">SambaNova Systems\u003C\u002Fa>, \u003Ca href=\"#GreenWaves\">GreenWaves Technology\u003C\u002Fa>, \u003Ca href=\"#Lightelligence\">Lightelligence\u003C\u002Fa>, \u003Ca href=\"#Lightmatter\">Lightmatter\u003C\u002Fa>, \u003Ca href=\"#Hailo\">Hailo\u003C\u002Fa>, \u003Ca href=\"#Tachyum\">Tachyum\u003C\u002Fa>, \u003Ca href=\"#Alphaics\">AlphaICs\u003C\u002Fa>, \u003Ca href=\"#Syntiant\">Syntiant\u003C\u002Fa>, \u003Ca href=\"#aiCTX\">aiCTX\u003C\u002Fa>, \u003Ca href=\"#Flexlogix\">Flex Logix\u003C\u002Fa>, \u003Ca href=\"#PFN\">Preferred 
Network\u003C\u002Fa>, \u003Ca href=\"#Cornami\">Cornami\u003C\u002Fa>, \u003Ca href=\"#Anaflash\">Anaflash\u003C\u002Fa>, \u003Ca href=\"#Optalysys\">Optalysys\u003C\u002Fa>, \u003Ca href=\"#etacompute\">Eta Compute\u003C\u002Fa>, \u003Ca href=\"#Achronix\">Achronix\u003C\u002Fa>, \u003Ca href=\"#Areanna\">Areanna AI\u003C\u002Fa>, \u003Ca href=\"#Neuroblade\">Neuroblade\u003C\u002Fa>, \u003Ca href=\"#Luminous\">Luminous Computing\u003C\u002Fa>, \u003Ca href=\"#Efinix\">Efinix\u003C\u002Fa>, \u003Ca href=\"#AIstorm\">AISTORM\u003C\u002Fa>, \u003Ca href=\"#SiMa\">SiMa.ai\u003C\u002Fa>, \u003Ca href=\"#Untether\">Untether AI\u003C\u002Fa>, \u003Ca href=\"#GrAI\">GrAI Matter Labs\u003C\u002Fa>, \u003Ca href=\"#Rain\">Rain Neuromorphics\u003C\u002Fa>, \u003Ca href=\"#ABR\">Applied Brain Research\u003C\u002Fa>, \u003Ca href=\"#Xmos\">XMOS\u003C\u002Fa>, \u003Ca href=\"#DinoplusAI\">DinoPlusAI\u003C\u002Fa>, \u003Ca href=\"#Furiosa\">Furiosa AI\u003C\u002Fa>, \u003Ca href=\"#Perceive\">Perceive\u003C\u002Fa>, \u003Ca href=\"#SimpleMachines\">SimpleMachines\u003C\u002Fa>, \u003Ca href=\"#Neureality\">Neureality\u003C\u002Fa>, \u003Ca href=\"#AnalogInference\">Analog Inference\u003C\u002Fa>, \u003Ca href=\"#Quadric\">Quadric\u003C\u002Fa>, \u003Ca href=\"#EdgeQ\">EdgeQ\u003C\u002Fa>, \u003Ca href=\"#Innatera\">Innatera Nanosystems\u003C\u002Fa>, \u003Ca href=\"#Ceremorphic\">Ceremorphic\u003C\u002Fa>, \u003Ca href=\"#Aspinity\">Aspinity\u003C\u002Fa>, \u003Ca href=\"#Teramem\">TeraMem\u003C\u002Fa>, \u003Ca href=\"#d-matrix\">d-Matrix\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"IC_Vendors\">\u003C\u002Fa>I. 
集成电路供应商\u003C\u002Fh2>\u003C\u002Fdiv>\n\u003CHR>\n\u003Cdiv align=\"center\">\u003Ch1> \u003C\u002Fh1>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Nvidia\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_74f7256a47eb.png\" height=\"50\"> \u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3>GPU\u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fnvidianews.nvidia.com\u002Fnews\u002Fnvidia-microsoft-accelerate-cloud-enterprise-ai\">英伟达携手微软打造超大规模云端人工智能计算机\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>数以万计的英伟达GPU、NVIDIA Quantum-2 InfiniBand以及完整的英伟达AI软件栈将入驻Azure；英伟达、微软及全球企业将利用该平台实现快速且经济高效的人工智能开发与部署。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fdeveloper.nvidia.com\u002Fblog\u002Fnvidia-hopper-architecture-in-depth\">深入解析英伟达Hopper架构\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>在2022年英伟达GTC主题演讲中，英伟达首席执行官黄仁勋发布了基于全新英伟达Hopper GPU架构的NVIDIA H100 Tensor Core GPU。本文将带您深入了解这款全新的H100 GPU，并介绍英伟达Hopper架构GPU的重要新特性。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16610\u002Fnvidia-unveils-grace-a-highperformance-arm-server-cpu-for-use-in-ai-systems\">英伟达发布Grace：用于大型AI系统的高性能Arm服务器CPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>在英伟达又一个繁忙的春季GPU技术大会拉开帷幕之际，这家图形和加速器设计公司今天上午宣布，他们将再次自主研发基于Arm架构的CPU\u002FSoC。这款名为“Grace”的芯片以计算机编程先驱、美国海军少将格蕾丝·霍珀命名，是英伟达进一步垂直整合其硬件产品线的最新尝试——通过提供高性能CPU来补充其常规GPU产品。据英伟达称，该芯片专为大规模神经网络工作负载而设计，预计将于2023年应用于英伟达的产品中。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Intel\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a7a82a1976a6.png\" height=\"60\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Ca name=\"Mobileye\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Ch3>Mobileye EyeQ\u003C\u002Fh3>\u003C\u002Fdiv>\n> Mobileye目前正在开发第五代SoC——\u003Ca href=\"https:\u002F\u002Fwww.mobileye.com\u002Four-technology\u002Fevolution-eyeq-chip\u002F\">EyeQ®5\u003C\u002Fa>——作为视觉中央处理器，用于实现完全自动驾驶（Level 5）车辆的传感器融合，计划于2020年投入道路使用。为满足功耗和性能目标，EyeQ® SoC采用最先进的VLSI工艺节点进行设计，第五代甚至达到了7nm FinFET工艺。\n\n\u003Ca name=\"Loihi 2\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Ch3>Loihi\u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fnewsroom\u002Fnews\u002Fintel-unveils-neuromorphic-loihi-2-lava-software.html\">英特尔推出Loihi 2、全新Lava软件框架及新合作伙伴，推动神经拟态计算发展\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>第二代研究芯片采用预量产的Intel 4工艺，规模扩大至100万个神经元。英特尔还推出了开放的软件框架，以加速开发者创新并推动商业化进程。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Habana\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ch3>Habana\u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.intel.com\u002Fcontent\u002Fwww\u002Fus\u002Fen\u002Fnewsroom\u002Fnews\u002Fvision-2022-habana-gaudi2-greco.html\">英特尔旗下 Habana Labs 推出用于训练和推理的第二代 AI 处理器\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>今天在 Intel Vision 大会上，英特尔宣布其专注于 AI 深度学习处理器技术的数据中心团队 Habana Labs 推出了用于训练和推理的第二代深度学习处理器：Habana® Gaudi®2 和 Habana® Greco™。这些新处理器填补了行业空白，为客户提供高性能、高能效的深度学习计算选择，适用于数据中心中的训练工作负载和推理部署，同时降低了各规模企业的 AI 入门门槛。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fhabana.ai\u002Faws-launches-ec2-dl1-instances\u002F\">Habana Gaudi 登陆亚马逊 EC2 云平台\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>创建这一全新训练实例系列的主要动因由 Andy Jassy 在 2020 年的 re:Invent 
大会上提出：“为我们的终端客户提供比当前一代基于 GPU 的实例高出 40% 的性价比。”\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Qualcomm\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_02281afe6520.png\" height=\"40\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Ca href=\"https:\u002F\u002Fwww-forbes-com.cdn.ampproject.org\u002Fc\u002Fs\u002Fwww.forbes.com\u002Fsites\u002Fkarlfreund\u002F2022\u002F11\u002F16\u002Fqualcomm-ups-the-snapgragon-ai-game\u002Famp\u002F\">高通提升骁龙 AI 竞争力\u003C\u002Fa>\n\u003Cblockquote>\n  \u003Cp>这家高端移动 SoC 的领导者已将 AI 应用于整个平台。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.qualcomm.com\u002Fproducts\u002Ftechnology\u002Fprocessors\u002Fcloud-artificial-intelligence\u002Fcloud-ai-100\">Qualcomm Cloud AI 100\u003C\u002Fa>\u003C\u002Fstrong>\n\u003Cblockquote>\n  \u003Cp>专为 AI 推理加速而设计的 Qualcomm Cloud AI 100，满足云端的独特需求，包括能效、规模、制程节点进步以及信号处理等，从而帮助数据中心更快速、更高效地在边缘云上运行推理任务。Qualcomm Cloud AI 100 旨在成为日益依赖边缘云基础设施的数据中心的领先解决方案。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"Samsung\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ad6b3054b304.png\" height=\"35\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnews.samsung.com\u002Fglobal\u002Fsamsung-brings-on-device-ai-processing-for-premium-mobile-devices-with-exynos-9-series-9820-processor\">三星借助 Exynos 9 系列 9820 处理器为高端移动设备带来端侧 AI 处理能力\u003C\u002Fa>\u003C\u002Fstrong>\n> 第四代定制核心与 2.0Gbps LTE Advanced Pro 调制解调器，可支持 AR 和 VR 等丰富移动体验\n\n\u003Cbr> \n三星最近发布了“\u003Ca 
href=\"https:\u002F\u002Fnews.samsung.com\u002Fglobal\u002Fsamsung-optimizes-premium-exynos-9-series-9810-for-ai-applications-and-richer-multimedia-content\">新款 Exynos 9810 搭载 2.9GHz 定制 CPU、业界首款 6CA LTE 调制解调器以及深度学习处理能力，带来高端特性\u003C\u002Fa>”。\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Ca name=\"AMD\">\u003C\u002Fa>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_957f712e31f6.png\" height=\"35\">\u003C\u002Fdiv>\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n即将发布的 \u003Ca href=\"https:\u002F\u002Fwww.amd.com\u002Fen\u002Fgraphics\u002Finstinct-server-accelerators\">AMD Instinct™ MI 系列加速器\u003C\u002Fa>\n> AMD Instinct™ 加速器从零开始设计，专为数据中心计算的新时代打造，可大幅提升 HPC 和 AI 工作负载性能，推动新发现。AMD Instinct™ 加速器家族能够在任何规模的数据中心提供行业领先的性能，从单服务器解决方案到全球最大的超级计算机。凭借 AMD CDNA™ 2 架构、AMD Infinity Fabric™ 技术以及封装技术方面的创新，最新的 AMD Instinct™ 加速器旨在以百亿亿次级算力推动科学发现，帮助科学家应对我们最紧迫的挑战。\n\n\u003Cp>\u003Ca name=\"IBM\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5beb2661c40d.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fsystems\u002Fibm-telum-processor-the-next-gen-microprocessor-for-ibm-z-and-ibm-linuxone\u002F\">认识 IBM 人工智能单元\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>这是我们首款完整的片上系统，旨在比通用 CPU 更快、更高效地运行和训练深度学习模型。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fsystems\u002Fibm-telum-processor-the-next-gen-microprocessor-for-ibm-z-and-ibm-linuxone\u002F\">IBM Telum 处理器：面向 IBM Z 和 IBM LinuxONE 的下一代微处理器\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>这款 7nm 微处理器专为满足客户的需求而设计——在不牺牲高吞吐量事务性工作负载响应时间的前提下，从数据中获取基于 AI 
的洞察力。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ibm.com\u002Fblogs\u002Fresearch\u002Ftag\u002Ftruenorth\u002F\">TrueNorth\u003C\u002Fa> 是 IBM 与 DARPA \u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FSyNAPSE\">SyNAPSE\u003C\u002Fa> 计划合作开发的神经形态 CMOS ASIC。\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>它是一种片上多核处理器网络架构，包含 4096 个核心，每个核心模拟 256 个可编程硅基“神经元”，总计超过一百万个神经元。相应地，每个神经元又拥有 256 个可编程“突触”，用于传递彼此之间的信号。因此，可编程突触的总数超过 2.68 亿（2\u003Csup>28\u003C\u002Fsup>）。就基本构建模块而言，其晶体管数量达到 54 亿。由于存储、计算和通信功能都集成在 4096 个神经突触核心中，TrueNorth 避免了冯·诺依曼架构的瓶颈，且非常节能，功耗仅为 70 毫瓦，约为传统微处理器功率密度的十万分之一。\u003Ca href=\"https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FTrueNorth\">维基百科\u003C\u002Fa>\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.research.ibm.com\u002Fartificial-intelligence\u002Fai-hardware-center\u002F\">AI 硬件中心\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>“IBM 研究院 AI 硬件中心是一个全球性的研究中心，总部位于纽约州奥尔巴尼。该中心致力于开发下一代芯片和系统，以支持 AI 实现其全部潜力所需的强大算力和前所未有的速度。”\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Marvell\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2bad2c36c703.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.marvell.com\u002Fproducts\u002Fdata-processing-units.html\">数据处理单元\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>基于业界首款、最具可扩展性且应用最广泛的七代数据基础设施处理器，Marvell的OCTEON™、OCTEON™ Fusion和ARMADA®平台专为无线基础设施、有线运营商网络、企业和云数据中心而优化。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca 
name=\"Tech_Giants\">\u003C\u002Fa>二、科技巨头与高性能计算供应商\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Google\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9e09b3d608f6.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fgoogle-tensor-everything-you-need-to-know-about-the-pixel-6-chip\u002F\">谷歌Tensor：关于Pixel 6芯片你需要知道的一切\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>谷歌发布了其最新的Pixel智能手机，在诸多变化中，对长期影响最大的一项就是这家搜索巨头转而采用自研芯片。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002F2021\u002F05\u002F20\u002Fgoogle-launches-tpu-v4-ai-chips\u002F\">谷歌推出TPU v4 AI芯片\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>本周在Google I\u002FO线上大会上，谷歌CEO桑达尔·皮查伊在其主题演讲中仅用1分42秒便介绍了公司最新的TPU v4张量处理单元，但这或许是此次活动中最重要、也是最受期待的消息。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fcloud.google.com\u002Ftpu\">Cloud TPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>机器学习已在网络安全、医学诊断等领域带来了商业和科研上的突破。我们打造了张量处理单元（TPU），旨在让每个人都能实现类似的突破。Cloud TPU是专为谷歌旗下产品如翻译、照片、搜索、助理和Gmail等设计的机器学习专用ASIC。以下介绍如何利用TPU和机器学习来加速贵公司的成功，尤其是在大规模场景下。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fcloud.google.com\u002Fedge-tpu\u002F\">Edge TPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>如今，人工智能已广泛应用于消费级和企业级场景。随着联网设备的爆炸式增长，以及对隐私\u002F机密性、低延迟和带宽限制的需求日益增加，云端训练的人工智能模型越来越需要在边缘端运行。Edge TPU是谷歌专门打造的用于在边缘运行AI的ASIC，它以小巧的体积和低功耗提供高性能，从而支持高精度AI在边缘部署。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>其他参考资料如下：\u003Cbr>\n\u003Ca 
href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fb22p26_delWfSpy9kDJKhA\">谷歌TPU3看点\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FKf_L4u7JRxJ8kF3Pi8M5iw\">谷歌TPU揭秘\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FlBQyNSNa6-joeLZ_Kq2W8A\">谷歌的神经网络处理器专利\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002Fg-BDlvSy-cx4AKItcWF7jQ\">脉动阵列——因谷歌TPU获得新生\u003C\u002Fa>\u003Cbr>\u003Cbr>\n\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fshould-we-all-embrace-systolic-arrays-chien-ping-lu\">我们都应该拥抱脉动阵列吗？\u003C\u002Fa>\u003Cbr>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Amazon_AWS\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a6c1d3963794.png\" height=\"50\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fmachine-learning\u002Ftrainium\u002F\">AWS Trainium\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AWS Trainium是AWS设计的第二款定制机器学习（ML）芯片，可在云端为深度学习模型训练提供最佳性价比。Trainium拥有最高的性能和最多的TFLOPS算力，能够实现亚马逊EC2中最快速的ML训练，并支持更广泛的ML应用场景。该芯片特别针对图像分类、语义搜索、翻译、语音识别、自然语言处理和推荐引擎等应用中的深度学习训练工作负载进行了优化。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Faws.amazon.com\u002Fcn\u002Fmachine-learning\u002Finferentia\u002F\">AWS Inferentia。由AWS定制设计的高性能机器学习推理芯片。\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AWS Inferentia以极低成本提供高吞吐量、低延迟的推理性能。每颗芯片可提供数百TOPS（每秒万亿次运算）的推理吞吐量，使复杂模型能够快速做出预测。若需更高性能，可将多颗AWS Inferentia芯片组合使用，以实现数千TOPS的吞吐量。AWS Inferentia将可用于Amazon SageMaker、Amazon EC2和Amazon Elastic Inference。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> 
\u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Microsoft\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a86fa93ff278.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Apple\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_cf1b0657dab9.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Alibaba\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2bd64dc8bf29.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmedium.com\u002Fsyncedreview\u002Falibabas-new-ai-chip-can-process-nearly-80k-images-per-second-63412dec22a3\">阿里巴巴新款AI芯片每秒可处理近8万张图片\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>在2019年阿里云飞天大会上，平头哥首次推出了面向云端大规模AI推理的专用AI处理器。含光800是阿里巴巴20年历史上的首款半导体产品。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tencent_Cloud\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_c4dc6b4e2279.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.datacenterdynamics.com\u002Fen\u002Fnews\u002Ftencent-reveals-three-data-center-chips-for-ai-video-transcoding-and-networking\u002F\">腾讯发布三款数据中心芯片：用于AI、视频转码和网络\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>该公司声称，Zixiao AI芯片的性能是同类竞品的两倍，视频转码芯片Canghai的性能提升了30%，而SmartNIC Xuanling则据称性能高出四倍。不过，该公司并未提供外部基准测试结果或具体的产品细节。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cbr \u002F>\n\u003Ca name=\"Baidu\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d68736ac9bad.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Fbaidu-says-2nd-gen-kunlun-ai-chips-enter-mass-production-2021-08-18\u002F\">百度表示第二代昆仑AI芯片已进入量产\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>中国科技巨头百度周三表示，已开始量产第二代昆仑人工智能（AI）芯片，以加速其在芯片行业的布局，而中国政府正致力于加强该领域的发展。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Fujitsu\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_b558b2bdbeec.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>富士通正在研发的这款DLU是从零开始设计的，并未基于Sparc或ARM指令集，而是采用了专为深度学习打造的全新指令集和数据格式，这些均是完全自主研发的成果。作为一家在高性能计算工作负载方面拥有丰富经验的日本IT巨头——其K超级计算机便是最佳例证——富士通并不认为HPC与AI架构会趋于融合。相反，该公司坚信这两类架构将走向分化，并各自需要高度专业化的功能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Nokia\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_368beff98e86.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>诺基亚为其5G网络解决方案开发了ReefShark系列芯片。在ReefShark的设计中融入了人工智能技术，将其嵌入基带处理模块，利用增强型深度学习驱动自主认知网络快速、智能地做出响应，从而提升网络优化效果并拓展商业机会。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Facebook\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9401f7667bc1.png\" height=\"50\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Ffacebook-developing-machine-learning-chip-information-2021-09-09\u002F\">Facebook正在研发机器学习芯片——The Information报道\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>The Information周四援引两位知情人士的消息称，Facebook公司（FB.O）正在开发一款用于处理内容推荐等任务的机器学习芯片。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tesla\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2881b6c5c985.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Fjamesmorris\u002F2022\u002F10\u002F06\u002Fteslas-biggest-news-at-ai-day-was-the-dojo-supercomputer-not-the-optimus-robot\u002F\">特斯拉AI日的最大亮点并非Optimus机器人，而是Dojo超级计算机\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  
\u003Cp>埃隆·马斯克在AI日期间将焦点放在了Optimus人形机器人上，试图吸引观众的关注。然而，尽管这款机器人若能以马斯克所宣称的价格（2万美元）实现大规模量产，确实可能对我们的生活和社会产生深远影响，但此次发布会中另一个部分的内容却具有更为直接且紧迫的影响——那就是关于Dojo超级计算机的最新进展。相比双足行走的机器人，Dojo超级计算机更有可能迅速改变世界。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsemianalysis.com\u002Ftesla-dojo-ai-super-computer-unique-packaging-and-chip-design-allow-an-order-magnitude-advantage-over-competing-ai-hardware\u002F\">特斯拉Dojo——独特的封装与芯片设计使其在性能上领先竞争对手一个数量级\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>特斯拉在其AI日活动中公开了其软硬件基础设施的内部运作机制，其中就包括此前备受期待的Dojo AI训练芯片。特斯拉宣称，其D1 Dojo芯片具备GPU级别的计算能力、CPU级别的灵活性，并配备了网络交换机IO接口。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"IP_Vendors\">\u003C\u002Fa>III. 传统IP供应商\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"ARM\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1d3f209f5406.png\" height=\"30\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.arm.com\u002Fproducts\u002Fsilicon-ip-cpu\u002Fethos\u002Fethos-n78\">NPU ETHOS-N78\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>这款ML处理器专为边缘推理场景设计，可提供业界领先的4.6 TOPs性能，同时在移动设备和智能IP摄像头中展现出惊人的3 TOPs\u002FW能效比。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F12791\u002Farm-details-project-trillium-mlp-architecture\">ARM详解“Project Trillium”机器学习处理器架构\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Arm推出的第二代高扩展性、高效能NPU——Ethos-N78，能够支持全新的沉浸式应用，其单核性能提升了2.5倍，现可通过多核技术进一步扩展至1到10 
TOP\u002Fs甚至更高。此外，它还提供了超过90种配置选项，灵活满足不同ML应用场景的需求。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Synopsys\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_43dafbf7ed4d.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnews.synopsys.com\u002F2022-04-19-Synopsys-Introduces-Industrys-Highest-Performance-Neural-Processor-IP\">新思科技推出业界性能最高的神经网络处理器IP\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>全新的DesignWare ARC NPX6 NPU IP为汽车、消费电子及数据中心芯片设计提供高达3,500 TOPS的性能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Imagination\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_778765511bd5.png\" height=\"60\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.imaginationtech.com\u002Fproducts\u002Fai\u002F\">AI处理器\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>无论您希望将智能嵌入掌上设备、消费类产品或工业机器人中，还是通过云端的强大服务器来实现，我们都能帮助您实现愿景。我们凭借PowerVR神经网络加速器（NNA）和GPU，为您的产品注入智能。我们的NC-SDK可实现AI加速在我们的硬件IP上的无缝部署，无论是单独使用还是与其他组件结合。我们的NNA采用可扩展架构，能够以最高效率支持从低功耗物联网设备到高性能RoboTaxi等各类智能边缘和终端设备。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"CEVA\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_c66247b85550.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> 
\u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.ceva-dsp.com\u002Fapp\u002Fdeep-learning\u002F\">面向实时嵌入式领域的深度学习\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>一种解决方案是在边缘端提供专用的低功耗深度学习AI处理器，并结合深度神经网络（DNN）图编译器。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"Cadence\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4c47686a880a.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.cadence.com\u002Fen_US\u002Fhome\u002Ftools\u002Fip\u002Ftensilica-ip\u002Ftensilica-ai-platform.html\">Tensilica AI平台\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"VeriSilicon\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_aae253b62444.png\" height=\"40\">\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.verisilicon.com\u002Fen\u002FIPPortfolio\u002FVivanteNPUIP\">Vivante® NPU IP\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>VeriSilicon的神经网络处理器（NPU）IP是一款高度可扩展、可编程的计算机视觉与人工智能处理器，支持终端设备、边缘设备及云端设备的AI运算升级。该IP专为满足不同芯片尺寸与功耗预算而设计，是一种经济高效、高质量的神经网络加速引擎解决方案。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca name=\"Startups_Worldwide\">\u003C\u002Fa>IV. 
创业公司\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Cerebras\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.cerebras.net\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_3e483f76b984.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.cerebras.net\u002Fpress-release\u002Fcerebras-unveils-andromeda-a-13.5-million-core-ai-supercomputer-that-delivers-near-perfect-linear-scaling-for-large-language-models\">Cerebras发布Andromeda——一款拥有1350万核心的AI超级计算机，可为大型语言模型提供近乎完美的线性扩展能力\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Andromeda提供超过1 Exaflop的AI算力以及120 Petaflops的密集计算能力，是迄今为止建造的最大规模AI超级计算机之一，且使用极为简便。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.cerebras.net\u002Fblog\u002Fcerebras-sets-record-for-largest-ai-models-ever-trained-on-single-device\">Cerebras创下单个设备训练最大AI模型的新纪录\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>我们宣布了在单个设备上训练过的最大模型。借助Cerebras软件平台（CSoft），客户可以轻松地在单台CS-2系统上训练最先进的GPT语言模型（如GPT-3[i]和GPT-J[ii]），参数规模可达200亿。这些模型只需几分钟即可完成设置，用户仅需几次按键操作就能快速切换不同模型。相比之下，使用GPU集群则需要数月的工程工作。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F17061\u002Fcerebras-completes-series-f-funding-another-250m-for-4b-valuation\">Cerebras完成F轮融资，再获2.5亿美元，估值达40亿美元\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>新一轮F轮融资为公司带来了2.5亿美元的资金，使其通过风险投资累计筹集的资金总额达到7.2亿美元。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16626\u002Fcerebras-unveils-wafer-scale-engine-two-wse2-26-trillion-transistors-100-yield\">Cerebras发布晶圆级引擎二号（WSE2）：2.6万亿个晶体管，良率100%\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>两年前，Cerebras曾掀起一场硅片设计革命：推出了一款体积与人头相当的处理器，其占用的12英寸晶圆面积几乎达到了矩形设计所能允许的最大值，采用16nm工艺制造，专注于AI和HPC工作负载。如今，该公司推出了第二代产品，基于台积电7nm工艺打造，核心数量和各项指标均翻倍。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2019\u002F11\u002F19\u002Fthe-cerebras-cs-1-computes-deep-learning-ai-problems-by-being-bigger-bigger-and-bigger-than-any-other-chip\u002F\">Cerebras CS-1通过比任何其他芯片都更大、更大、更大的方式来解决深度学习AI问题\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>今天，该公司宣布推出面向最终用户的计算产品Cerebras CS-1，并同时宣布其首位客户——阿贡国家实验室。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Graphcore\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.graphcore.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_63a3e48f24a3.png\" height=\"70\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fgraphcore-supercharges-ipu-with-wafer-on-wafer\u002F\">Graphcore以晶圆对晶圆技术大幅提升IPU性能\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Graphcore发布了第三代智能处理单元（IPU），这是首款采用3D晶圆对晶圆（WoW）技术制造的处理器。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.graphcore.ai\u002Fmk2-benchmarks\">MK2性能基准测试\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2020\u002F02\u002F24\u002Fgraphcore-the-ai-chipmaker-raises-another-150m-at-a-1-95b-valuation\u002F\">Graphcore，这家人工智能芯片制造商，在19.5亿美元估值下再融资1.5亿美元\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  
\u003Cp>总部位于布里斯托尔的人工智能专用处理器初创公司Graphcore宣布，已再次筹集1.5亿美元资金，用于研发并继续拓展新客户。其估值现已达到19.5亿美元。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FCH9h8dUtoNK_2ZfkK5YU0g\">解密又一个xPU：Graphcore的IPU\u003C\u002Fa> 对其IPU架构进行了一些分析。\u003C\u002Fp>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FAMuqeaShqEv3DnibH3scEA\">Graphcore AI芯片：更多分析\u003C\u002Fa>。\u003C\u002Fp>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FqP0zsSA7SQWXDqWGEAXmOg\">深度剖析AI芯片初创公司Graphcore的IPU\u003C\u002Fa> 在更多信息披露后进行了深入分析。\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tenstorrent\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Ftenstorrent.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4eb29e830cdc.png\" height=\"100\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Ftenstorrent-raises-over-200-million-at-1-billion-valuation-to-create-programmable-high-performance-ai-computers-301295913.html\">Tenstorrent以10亿美元估值筹集超过2亿美元，打造可编程高性能AI计算机\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>多伦多，2021年5月20日 \u002F美通社\u002F - 开发下一代计算机的硬件初创公司Tenstorrent今日宣布，在最新一轮融资中筹集了超过2亿美元，公司估值达10亿美元。本轮融资由富达管理与研究公司领投，Eclipse Ventures、Epic CG和Moore Capital等机构也参与投资。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.anandtech.com\u002Fshow\u002F16709\u002Fan-interview-with-tenstorrent-ceo-ljubisa-bajic-and-cto-jim-keller\">与Tenstorrent的对话：CEO Ljubisa Bajic和CTO Jim Keller\u003C\u002Fa>\u003C\u002Fp>\n \n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca 
name=\"Blaize\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.blaize.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_234bc48ab85e.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fautomotive-ai-startup-blaize-closes-71-million-funding-round\u002F\">汽车AI初创公司Blaize完成7100万美元融资\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Blaize（前身为ThinCI）已完成7100万美元的D轮融资。新投资者富兰克林邓普顿以及现有投资者淡马锡领投了本轮融资，同时还有电装及其他新老投资者参与。至此，Blaize的总融资额已达约1.55亿美元。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Koniku\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fkoniku.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2a7c88b55b59.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>成立于2014年的加州纽瓦克初创公司\u003Ca href=\"http:\u002F\u002Fkoniku.io\u002F\">Koniku\u003C\u002Fa>迄今已获得165万美元融资，致力于成为“全球首家神经计算公司”。其理念是：既然大脑是迄今为止最强大的计算机，为何不对其进行逆向工程呢？听起来很简单，对吧？事实上，Koniku正在将生物神经元集成到芯片上，并已取得足够进展，据称阿斯利康已成为其客户。波音公司也已签署意向书，计划将该技术应用于化学物质探测无人机。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Adapteva\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.adapteva.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_8c95340c84dc.png\" height=\"70\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca 
href=\"http:\u002F\u002Fwww.adapteva.com\u002F\">Adapteva\u003C\u002Fa>已从包括移动巨头爱立信在内的投资者处筹集了510万美元资金。《\u003Ca href=\"http:\u002F\u002Fwww.parallella.org\u002Fdocs\u002Fe5_1024core_soc.pdf\">Epiphany-V：一款拥有1024个处理器的64位RISC片上系统\u003C\u002Fa>》一文详细介绍了Adapteva采用16nm FinFET工艺设计的1024核处理器芯片。\u003C\u002Fp>\n\n\u003Cp>\u003Ca name=\"Mythic\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fmythic.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ffe103ad5856.png\" height=\"20\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.linkedin.com\u002Fpulse\u002Fera-analog-compute-has-arrived-michael-b-henry\u002F\">模拟计算时代已经到来！\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>我们在原型模拟AI处理器中运行ResNet-50。量产版本将在3瓦功耗下实现900-1000帧\u002F秒的处理速度，并保持INT8精度。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2021\u002F06\u002F07\u002Fmythic-launches-analog-ai-processor-that-consumes-10-times-less-power\u002F\">Mythic推出功耗降低10倍的模拟AI处理器\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>模拟AI处理器公司Mythic今日发布了M1076模拟矩阵处理器，旨在提供低功耗的AI处理能力。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Brainchip\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.brainchipinc.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_80042cb380b0.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca 
href=\"hhttps:\u002F\u002Fventurebeat.com\u002F2022\u002F01\u002F18\u002Fbrainchip-launches-neuromorphic-process-for-ai-at-the-edge\u002F\">BrainChip推出面向边缘端的神经形态处理器\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>BrainChip今日宣布其Akida神经网络处理器正式商业化。该产品面向各类边缘计算及物联网应用，BrainChip声称自己是首家商业化的神经形态AI芯片生产商，相较于传统方法，可在超低功耗和性能方面带来显著优势。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Deepvision\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fdeepvision.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1ab05a056ec2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"AI Processor Chipmaker Deep Vision Raises $35 Million in Series B Funding\">AI处理器芯片制造商Deep Vision完成3500万美元B轮融资\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Tiger Global领投B轮融资，助力Deep Vision在边缘计算应用中扩展视频分析和自然语言处理能力\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Groq\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>\u003Ca href=\"http:\u002F\u002Fgroq.com\u002F\">Groq\u003C\u002Fa>\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fgroq-demos-fast-llms-on-4-year-old-silicon\u002F\">Groq 在四年前的芯片上演示快速大模型推理\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>加利福尼亚州山景城——Groq 已将其第一代 AI 推理芯片重新定位为语言处理单元（LPU），并展示了 Meta 的 Llama-2 700 亿参数大型语言模型（LLM）以每用户每秒 240 个标记的速度进行推理。Groq 首席执行官乔纳森·罗斯告诉 EE Times，该公司仅用“几天时间”就在其基于公司第一代 AI 芯片的 10 机架（64 芯片）云端开发系统上成功运行了 Llama-2。该系统采用的是 Groq 四年前发布的初代 AI 芯片。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Famyfeldman\u002F2021\u002F04\u002F14\u002Fai-chip-startup-groq-founded-by-ex-googlers-raises-300-million-to-power-autonomous-vehicles-and-data-centers\u002F\">由前谷歌员工创立的 AI 芯片初创公司 Groq 筹集 3 亿美元，用于支持自动驾驶汽车和数据中心\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>乔纳森·罗斯离开谷歌后，于 2016 年创办了下一代半导体初创公司 Groq。如今，这家位于加利福尼亚州山景城的企业宣布，在正式进入公众视野之际，已获得由 Tiger Global Management 和亿万富翁投资者丹·桑德海姆旗下 D1 Capital 领投的 3 亿美元融资。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Kneron\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.kneron.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6fa54b62835b.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Fkneron-to-accelerate-edge-ai-development-with-more-than-10-million-usd-series-a-financing-300556674.html\">Kneron 完成逾 1000 万美元 A 轮融资，加速边缘 AI 发展\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"GTI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.gyrfalcontech.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d08a29c36ae6.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>根据本文所述，\u003Ca href=\"https:\u002F\u002Fwww.prnewswire.com\u002Fnews-releases\u002Fgyrfalcon-offers-automotive-ai-chip-technology-300860069.html\">“Gyrfalcon 提供汽车 AI 芯片技术”\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Gyrfalcon Technology Inc.（GTI）自 2017 年 9 月推出其量产版 AI 加速器芯片以来，一直致力于推广适用于各类 AI 
的基于矩阵的应用特定芯片。通过授权其专有技术，该公司有信心能够在 18 个月内帮助汽车制造商将极具竞争力的 AI 芯片投入车辆生产，同时显著提升 AI 性能、改善功耗并带来成本优势。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"SambaNova\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fsambanovasystems.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6f7e319e9b2c.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002Fai\u002Fsambanova-unveils-new-ai-chip-to-power-full-stack-ai-platform\u002F\">SambaNova 发布全新 AI 芯片，为全栈 AI 平台提供动力\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>今日，总部位于帕洛阿尔托的 SambaNova Systems 公布了一款新型 AI 芯片 SN40L，该芯片将为其全栈大型语言模型（LLM）平台 SambaNova Suite 提供算力支持，助力企业实现从芯片到模型的全流程——构建并部署定制化的生成式 AI 模型。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F04\u002F13\u002Fsambanova-raises-676m-at-a-5-1b-valuation-to-double-down-on-cloud-based-ai-software-for-enterprises\u002F\">SambaNova 以 51 亿美元估值完成 6.76 亿美元融资，加码面向企业的云原生 AI 软件\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SambaNova 是一家专注于 AI 硬件及其配套集成系统的初创公司，直到去年 12 月才正式结束三年的隐身期。今天，该公司宣布完成一轮巨额融资，进一步拓展其业务版图。公司已敲定 6.76 亿美元的 D 轮融资，联合创始人兼 CEO 罗德里戈·梁确认，公司估值已达 51 亿美元。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsambanova.ai\u002Farticles\u002Fintroducing-sambanova-systems-datascale-a-new-era-of-computing\u002F\">隆重推出 SambaNova Systems DataScale：计算新时代\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fsambanova.ai\u002Fa-new-state-of-the-art-in-nlp-beyond-gpus\u002F\">NLP 新标杆：超越 GPU\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SambaNova 近几个月来与多家机构紧密合作，已在自然语言处理领域树立了新的行业标杆。这一 NLP 深度学习领域的突破性进展，体现在 SambaNova Systems 数据流优化系统上取得的 GPU 压倒性、打破世界纪录的性能表现中。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca name=\"GreenWaves\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fgreenwaves-technologies.com\u002Fen\u002Fgreenwaves-technologies-2\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_676acbe60bfe.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.eu\u002Fgreenwaves-shows-off-advanced-audio-demos\u002F\">GreenWaves 展示先进音频演示\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Gap9 处理器是针对物联网设备中计算机视觉应用的 Gap8 后继产品，是一款适用于电池供电设备的超低功耗神经网络处理器。GreenWaves 市场营销副总裁马丁·克鲁姆向 EE Times Europe 表示，公司在从 Gap8 在听觉设备市场获得积极反馈后，决定将 Gap9 的重点转向可穿戴音频设备市场。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Lightelligence\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.lightelligence.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_65b5f4974105.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Foptical-computing-chip-runs-hardest-math-problems-100x-faster-than-gpus\u002F\">光学芯片以比 GPU 快 100 倍的速度解决最复杂的数学问题\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>光学计算初创公司 Lightelligence 
展示了一款硅光子学加速器，其运行伊辛问题的速度比典型的 GPU 配置快 100 多倍。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Lightmatter\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.lightmatter.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_232477d0010b.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Flightmatter-raises-more-funding-for-photonic-ai-chip\u002F\">Lightmatter 再次融资，推进光子人工智能芯片发展\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>Lightmatter 是一家源自麻省理工学院的衍生企业，专注于打造采用硅光子计算引擎的人工智能加速器。近日，该公司宣布完成B轮融资，额外筹得8,000万美元。其技术基于专有的硅光子学平台，可在芯片内部操控相干光来快速执行运算，同时大幅降低功耗。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Hailo\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.hailotech.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_6364057efc82.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Funicorn-ai-chipmaker-hailo-raises-136-million\u002F\">“独角兽”级AI芯片厂商 Hailo 筹集1.36亿美元\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>以色列AI芯片初创公司 Hailo 在C轮融资中募得1.36亿美元，使总融资额达到2.24亿美元。据报道，该公司目前已晋升为“独角兽”企业。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Tachyum\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.tachyum.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_f9512d345867.png\" 
height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002Foff-the-wire\u002Ftachyum-launches-prodigy-universal-processor\u002F\">Tachyum推出Prodigy通用處理器\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>2021年5月11日 — Tachyum今日正式發布全球首款通用處理器Prodigy，該處理器將CPU、GPU和TPU的功能整合於單一晶片之中，構建出均質化架構，並以遠低於競爭對手的成本實現性能的巨大提升。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Alphaics\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.alphaics.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_9ba8d6508cc9.png\" height=\"50\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Falphaics-begins-sampling-its-deep-learning-co-processor\u002F\">AlphaICs開始樣品供應其深度學習協處理器\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>AlphaICs是一家專注於開發面向智慧視覺應用的邊緣AI與學習晶片的新創公司，目前正開始提供其深度學習協處理器Gluon的樣品，該產品同時配備軟體開發套件。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Syntiant\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.syntiant.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_fb1309e0fd55.png\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fsemiengineering.com\u002Fsyntiant-analog-deep-learning-chips\u002F\">Syntiant：類比深度學習晶片\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Syntiant是一家位於加州爾灣的半導體新創公司，由前博通頂尖工程師領軍，團隊兼具創新設計與大規模量產經驗，據公司CEO Kurt 
Busch表示。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"aiCTX\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Faictx.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7340367511ce.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1333983\">百度投资神经形态IC研发商\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>慕尼黑 — 瑞士初创公司 aiCTX 完成来自百度风投的150万美元A轮前融资，用于开发其低功耗神经形态计算与处理器设计的商业应用，并推动所谓“神经形态智能”的实现。该公司主要面向低功耗边缘计算嵌入式传感处理系统。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Flexlogix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.flex-logix.com\u002Fnmax\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5a8f2c488a4c.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fflex-logix-has-two-paths-to-making-a-lot-of-money-challenging-nvidia-in-ai\u002F\">Flex Logix有两条路径可赚大钱，在AI领域挑战NVIDIA\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>这家可编程芯片公司获得5,500万美元的风险投资支持，使其总融资额达到8,200万美元。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"PFN\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fprojects.preferred.jp\u002Fmn-core\u002Fen\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_1018bf414725.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> 
\u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.preferred-networks.jp\u002Fen\u002Fnews\">Preferred Networks于2020年春季为新型大型集群MN-3开发自研深度学习处理器MN-Core\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>2018年12月12日，日本东京 — Preferred Networks株式会社（总部位于东京，董事长兼首席执行官西川徹）宣布正在开发专用于深度学习的MN-Core（TM）处理器，并将在东京Big Sight举办的SEMICON Japan 2018展览上独立展出其自主研发的深度学习硬件，包括MN-Core芯片、主板及服务器设备。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Cornami\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fcornami.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_4c30993a20ad.jpg\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fai-startup-cornami-reveals-details-of-neural-net-chip\u002F\">AI初创公司Cornami披露神经网络芯片细节\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>隐形初创公司Cornami于周四公开了其用于运行神经网络的全新芯片设计的一些细节。该公司CTO保罗·马斯特斯表示，这款芯片将最终实现一种早在20世纪70年代就已出现的技术的最佳优势。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Anaflash\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fanaflash.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_739f9376db77.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.smart2zero.com\u002Fnews\u002Fai-chip-startup-offers-new-edge-computing-solution\">AI芯片初创公司推出全新边缘计算解决方案\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Anaflash公司（位于加州圣何塞）是一家初创企业，开发了一款测试芯片，用于演示在与逻辑兼容的嵌入式闪存中进行的模拟神经计算。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca 
name=\"Optalysys\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.optalysys.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a049a3d12869.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.globenewswire.com\u002Fnews-release\u002F2019\u002F03\u002F07\u002F1749510\u002F0\u002Fen\u002FOptalysys-launches-world-s-first-commercial-optical-processing-system-the-FT-X-2000.html\">Optalysys推出全球首款商用光学处理系统FT:X 2000\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Optalysys公司研发了光学协处理器技术，该技术能够在大幅降低能耗的同时，提供远超传统计算机的处理能力。其首款协处理器基于成熟的衍射光学方案，利用低功率激光光子代替传统的电力及其电子来实现计算。这种固有的并行技术具有高度可扩展性，代表了计算领域的新范式。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"etacompute\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fetacompute.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_b4186d563ba2.png\" height=\"80\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fspectrum.ieee.org\u002Ftech-talk\u002Fsemiconductors\u002Fprocessors\u002Flowpower-ai-startup-eta-compute-delivers-first-commercial-chips\">低功耗AI初创公司Eta Compute交付首批商用芯片\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>该公司通过采用新的电源管理方案，放弃了风险较高的脉冲神经网络技术。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fspectrum.ieee.org\u002Ftech-talk\u002Fsemiconductors\u002Fprocessors\u002Feta-compute-debuts-spiking-neural-network-chip-for-edge-ai\">Eta Compute推出面向边缘AI的脉冲神经网络芯片\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>该公司表示，该芯片能够自主学习，并且推理功耗仅为100微瓦级别，这一成果是在Arm TechCon上发布的。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca 
name=\"Achronix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.achronix.com\u002Fproduct\u002Fspeedster7t\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_eed09a32dac1.png\" height=\"30\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1334717\">Achronix推出用于AI的7nm FPGA\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Achronix凭借全新的高端7nm系列FPGA重返市场，加入了加速深度学习的硅片热潮。该公司旨在利用其创新的AI模块设计、新型片上网络以及GDDR6内存，以低于英特尔和赛灵思等大型竞争对手的成本提供相近的性能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Areanna\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fareanna-ai.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_3c88eb7bd4c8.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fdocument.asp?doc_id=1334947#\">初创公司采用新型SRAM运行AI\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Areanna是深度学习兴起催生的一系列新架构中的最新案例。这种全新的计算方式激发了业内工程师们的想象力，他们渴望成为下一个惠普式的成功典范。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Neuroblade\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.neuroblade.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7f150f5c3a57.png\" height=\"120\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca 
href=\"https:\u002F\u002Fwww.eetasia.com\u002Fnews\u002Farticle\u002FNeuroBlade-Preps-Inference-Chip\">NeuroBlade准备推出推理芯片\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>NeuroBlade加入了数十家致力于AI芯片研发的初创企业行列。这家以色列公司刚刚完成了由Check Point Software创始人领投、英特尔资本参与的2300万美元A轮融资。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Luminous\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.luminouscomputing.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_7befc65c9954.png\" height=\"90\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.technologyreview.com\u002Fs\u002F613668\u002Fai-chips-uses-optical-semiconductor-machine-learning\u002F\">比尔·盖茨刚刚投资了一家利用光技术加速AI的芯片初创公司\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>Luminous Computing开发了一种光学微芯片，能够在消耗更少能量的情况下，以远超其他半导体的速度运行AI模型。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Efinix\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.efinixinc.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_31ff09e90246.png\" height=\"25\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.zdnet.com\u002Farticle\u002Fchip-startup-efinix-hopes-to-bootstrap-ai-efforts-in-iot\u002F\">芯片初创公司Efinix希望在物联网领域推动AI应用\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>成立六年的Efinix公司在英特尔和赛灵思主导的FPGA技术基础上进行了巧妙创新；该公司希望其节能型芯片能够为物联网领域的嵌入式AI市场注入新的活力。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"AIstorm\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca 
href=\"https:\u002F\u002Faistorm.ai\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ec0742a88205.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2019\u002F02\u002F11\u002Faistorm-raises-13-2-million-for-ai-edge-computing-chips\u002F\">AIStorm筹集1320万美元用于AI边缘计算芯片\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>曾任职于Maxim、Micrel和Semtech的高级管理人员David Schie认为这两个市场都已具备颠覆性变革的条件。他与WSI、东芝和Arm的资深人士Robert Barker、Andreas Sibrai和Cesar Matias共同于2011年创立了位于圣何塞的人工智能初创公司AIStorm，该公司致力于开发能够直接处理来自可穿戴设备、手机、汽车设备、智能音箱及其他物联网设备数据的芯片组。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cp>\u003Ca name=\"SiMa\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fsima.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a4291b4e6218.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.businesswire.com\u002Fnews\u002Fhome\u002F20200512005313\u002Fen\u002FSiMa.ai-Raises-30-Million-Series-Investment-Led\">SiMa.ai 完成由 Dell Technologies Capital 领投的 3000 万美元 A 轮融资\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>加州圣何塞 — (美国商业资讯) — 致力于推动高性能机器学习绿色发展的 SiMa.ai 今日宣布推出其机器学习 SoC (MLSoC) 平台，这是业界首个统一的解决方案，能够在支持传统计算的同时，实现高性能、低功耗、安全可靠的机器学习推理。SiMa.ai 的 MLSoC 每瓦特可提供最高的每秒帧数，成为首个在 ResNet-50 上突破每瓦 1000 帧\u002F秒大关的机器学习平台。在与客户的合作中，该公司通过其自动化软件流程，在广泛的嵌入式边缘应用领域，相比当前的竞品方案，将每瓦帧率提升了 10 至 30 倍。该平台将提供从 5W 功耗下 50 TOPs 到 20W 功耗下 200 TOPs 不等的机器学习解决方案，并首次在业内实现了高性能推理下每瓦 10 TOPs 的性能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca 
href=\"https:\u002F\u002Fwww.businesswire.com\u002Fnews\u002Fhome\u002F20191022005079\u002Fen\u002FSiMa.ai%E2%84%A2-Introduces-MLSoC%E2%84%A2\">SiMa.ai™ 推出 MLSoC™ — 首个突破每瓦 1000 帧\u002F秒大关、较替代方案提升 10–30 倍的机器学习平台\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>SiMa.ai 是一家致力于让高性能机器学习走向绿色的公司，今日宣布推出其机器学习 SoC (MLSoC) 平台——这是业界首个能够以高性能、最低功耗、安全可靠的方式支持传统计算的统一机器学习推理解决方案。SiMa.ai 的 MLSoC 每瓦特可提供最高的每秒帧数，成为首个在 ResNet-50 上突破每瓦 1000 帧\u002F秒大关的机器学习平台。在与客户的合作中，该公司通过其自动化软件流程，在广泛的嵌入式边缘应用领域，相比当前的竞品方案，将每瓦帧率提升了 10 至 30 倍。该平台将提供从 5W 功耗下 50 TOPs 到 20W 功耗下 200 TOPs 的机器学习解决方案，首次在业内实现了高性能推理下每瓦 10 TOPs 的性能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Untether\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Funtether.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_998f80da7af2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2021\u002F07\u002F20\u002Funtether-ai-nabs-125m-for-ai-acceleration-chips\u002F\">Untether AI 获得 1.25 亿美元用于开发 AI 加速芯片\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>专注于为 AI 推理工作负载开发定制化芯片的初创公司 Untether AI 今日宣布，已从 Tracker Capital Management 和 Intel Capital 处筹集到 1.25 亿美元。本轮融资超额认购，加拿大养老金计划投资委员会和 Radical Ventures 也参与其中，资金将用于支持客户拓展。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\n\u003Cp>\u003Ca name=\"GrAI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.graimatterlabs.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_deab494a1d0f.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca 
href=\"https:\u002F\u002Fventurebeat.com\u002F2019\u002F09\u002F18\u002Fgrai-matter-labs-reveals-neuronflow-technology-and-announces-graiflow-sdk\u002F\">GrAI Matter Labs 发布 NeuronFlow 技术并宣布 GrAIFlow SDK\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\n\u003Cblockquote>\n  \u003Cp>神经形态计算领域的先驱 GrAI Matter Labs（简称 GML）今日发布了全新的可编程处理器技术 NeuronFlow，并宣布启动其 GrAIFlow 软件开发工具包的早期访问计划。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Rain\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Frain-neuromorphics.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_aaf566b782d2.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.crunchbase.com\u002Forganization\u002Frain-neuromorphics\">Rain Neuromorphics 在 Crunchbase 上的信息\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>我们打造受大脑启发的人工智能处理器。我们的使命是实现类脑规模的智能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"ABR\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fappliedbrainresearch.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ea38b40deabe.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.crunchbase.com\u002Forganization\u002Fapplied-brain-research\">Applied Brain Research 在 Crunchbase 上的信息\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>ABR 开发全球最先进的神经形态编译器、运行时及库，服务于新兴的神经形态计算领域。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Xmos\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.xmos.com\u002F\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_515efbca5ebb.png\" height=\"40\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fxmos-adapts-xcore-into-aiot-crossover-processor\u002F\">XMOS 将 Xcore 改造成 AIoT“跨界处理器”\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>EE Times 独家报道！这款新芯片面向物联网设备中的 AI 驱动语音交互——“终端侧最重要的 AI 工作负载”。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2020\u002F02\u002F12\u002Fxmos-unveils-xcore-ai-a-powerful-chip-designed-for-ai-processing-at-the-edge\u002F\">XMOS 发布 Xcore.ai，一款专为边缘端 AI 处理设计的强大芯片\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>最新的 xcore.ai 是一款跨界芯片，旨在在一个设备中同时实现高性能 AI、数字信号处理、控制以及输入输出功能，售价仅从 1 美元起。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"DinoplusAI\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fdinoplus.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_70e58ebe8f38.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>我们设计并生产用于数据中心运行的 AI 处理器及其配套软件。我们的独特方法以推理优化为核心，注重性能、能效和易用性；同时，这一方法也使训练更具成本效益。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Furiosa\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.furiosa.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ea697849018f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>我们制造高性能 AI 推理协处理器，可无缝集成到各类计算平台中，包括数据中心、服务器、桌面电脑、汽车和机器人。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca 
name=\"Corerain\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.corerain.com\u002Fen\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_365851b13dd9.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cblockquote>\n  \u003Cp>Corerain 提供超高性能的 AI 加速芯片以及全球首个基于流式引擎的 AI 开发平台。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Perceive\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fperceive.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_609b60445d34.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fventurebeat.com\u002F2020\u002F03\u002F31\u002Fperceive-emerges-from-stealth-with-ergo-edge-ai-chip\u002F\">Perceive 携 Ergo 边缘 AI 芯片悄然问世\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>专注于设备端计算解决方案的初创公司 Perceive 今日正式公开亮相，推出其首款产品——Ergo 边缘推理处理器。首席执行官 Steve Teig 声称，这款专为安防摄像头、智能家电和智能手机等消费类设备设计的芯片，在同类产品中实现了“突破性”的精度和性能。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"SimpleMachines\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.simplemachines.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_f701d005198f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.design-reuse.com\u002Fnews\u002F49012\u002Fsimplemachines-ai-chip-tsmc-16nm.html\">SimpleMachines, Inc. 
推出首款高性能芯片\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>在传统芯片制造商努力应对快速发展的 AI 软件生态带来的挑战之际，一家位于圣何塞的初创公司宣布已成功实现硅基原型，并提出了一种全新的、面向未来的芯片架构来解决这些问题。\n\nSimpleMachines, Inc.（SMI）团队汇聚了来自高通、英特尔和 Sun Microsystems 等公司的顶尖研究科学家与行业领袖，打造了首款易于编程的高性能芯片，可加速各类 AI 和机器学习应用。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Neureality\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.neureality.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_87779cc94152.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2022\u002F12\u002F06\u002Fneureality-ai-accelerator-chips-startup-raises-35m\u002F\">NeuReality 获得 3500 万美元融资，推进 AI 加速芯片上市\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>致力于开发 AI 推理加速芯片的初创公司 NeuReality 已完成 3500 万美元的新一轮风险投资。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.electronicsmedia.info\u002F2021\u002F05\u002F06\u002Fneureality-unveiled-nr1-p-a-novel-ai-centric-inference-platform\u002F\">NeuReality 发布 NR1-P：一款创新的以 AI 为中心的推理平台\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>NeuReality 正式发布了 NR1-P，这是一款以 AI 为中心的新型推理平台。该公司已开始向客户和合作伙伴展示其以 AI 为中心的平台。通过开发一种基于新型系统级芯片（SoC）的 AI 中心型推理平台，NeuReality 重新定义了当前过时的 AI 系统架构。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F02\u002F10\u002Fneureality-raises-8m-for-its-novel-ai-inferencing-platform\u002F\">NeuReality 获得 800 万美元融资，用于其创新的 AI 推理平台\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>以色列 AI 硬件初创公司 NeuReality 致力于通过摒弃当前以 CPU 为中心的模式来革新 AI 推理平台。该公司今日正式走出隐身状态，并宣布完成 800 万美元的种子轮融资。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n \n\u003Cp>\u003Ca name=\"AnalogInference\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca 
href=\"https:\u002F\u002Fwww.analog-inference.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_ca545698901d.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eenewsanalog.com\u002Fnews\u002Fanalog-inference-startup-raises-106-million\">模拟推理初创公司获 1060 万美元融资\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>该公司由 Khosla Ventures 投资支持，正开发用于边缘 AI 计算的第一代产品。公司在 2018 年 3 月成立后不久便筹集了 450 万美元，此次最新一轮融资使累计融资总额达到 1510 万美元。\u003C\u002Fp>\n\u003C\u002Fblockquote>\n\n\u003Cp>\u003Ca name=\"Quadric\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.quadric.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d51fda59e18f.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.hpcwire.com\u002Foff-the-wire\u002Fquadric-announces-unified-silicon-and-software-platform-optimized-for-on-device-ai\u002F\">Quadric 宣布推出面向设备端 AI 的统一硅硬件与软件平台\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>美国加利福尼亚州伯林盖姆，2021 年 6 月 22 日——专注于高性能边缘计算的创新企业 Quadric（quadric.io）推出了一款统一的硅硬件与软件平台，释放设备端 AI 的强大潜力。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca name=\"EdgeQ\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fedgeq.io\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_2f6ae7c349fb.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca 
href=\"https:\u002F\u002Ftechcrunch.com\u002F2021\u002F01\u002F26\u002Fedgeq-reveals-more-details-behind-its-next-gen-5g-ai-chip\u002F\">EdgeQ 公开下一代 5G\u002FAI 芯片更多细节\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>5G 是当前无线通信技术的一场革命，无论是老牌还是新兴的芯片公司都在争相进入这一竞争激烈但利润丰厚的市场。其中最引人注目的新晋玩家之一便是 EdgeQ，这是一家拥有深厚技术背景、曾与高通合作过的初创公司。我们去年曾报道过该公司完成近 4000 万美元 A 轮融资的消息。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Innatera\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.innatera.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_a92a6c2fb5be.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Finnatera-unveils-neuromorphic-ai-chip-to-accelerate-spiking-networks\u002F\">Innatera发布神经形态AI芯片，加速脉冲网络运算\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>荷兰初创公司Innatera致力于为脉冲神经网络打造神经形态AI加速器，现已生产出首批芯片，对其性能进行了评估，并公布了其架构细节。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Ceremorphic\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fceremorphic.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_19670e43a1c2.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.eetimes.com\u002Fredpine-founder-launches-ai-processor-startup\u002F\">Redpine创始人创办AI处理器初创公司\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>本周低调亮相的AI芯片初创公司Ceremorphic正准备推出一款异构AI处理器，目标应用包括数据中心模型训练、汽车、高性能计算、机器人技术以及其他新兴领域。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Aspinity\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv 
align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.aspinity.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_5efe07be3c22.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fembeddedcomputing.com\u002Ftechnology\u002Fanalog-and-power\u002Fanalog-semicundoctors-sensors\u002Faspinity-analog-ml-chip-allows-battery-powered-always-on\">Aspinity模拟ML芯片实现电池供电的“始终开启”功能\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>机器学习（ML）不就是需要海量的计算、数字信号处理等吗？或许并非如此，Aspinity团队认为。该公司持续在模拟技术领域发力。其最新推出的analogML系列产品AML100完全在模拟域运行。因此，它能够将系统“始终开启”状态下的功耗降低95%（说实话，我得反复确认了好几遍才相信这一点）。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"Teramem\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.tetramem.com\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_49d99193ce43.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\u003Cp>\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fwww.tetramem.com\u002Fposts\u002FTetraMem-Technology-Debut-at-Linley\">TetraMem在2022年林利春季处理器大会上首次公开亮相其模拟存内计算技术，备受瞩目。\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cp>\u003Ca name=\"d-matrix\">\u003C\u002Fa>\u003C\u002Fp>\n\u003Cdiv align=\"center\">\u003Ca href=\"https:\u002F\u002Fwww.d-matrix.ai\u002F\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_readme_d54084c9c0b1.png\" height=\"60\">\u003C\u002Fa>\u003C\u002Fdiv>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Cstrong>\u003Ca 
href=\"https:\u002F\u002Fwww.reuters.com\u002Ftechnology\u002Fai-chip-startup-d-matrix-raises-110-mln-with-backing-microsoft-2023-09-06\u002F\">独家：AI芯片初创公司d-Matrix获微软支持，融资1.1亿美元\u003C\u002Fa>\u003C\u002Fstrong>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>9月6日（路透社）——总部位于硅谷的人工智能芯片初创公司d-Matrix已从包括微软公司（MSFT.O）在内的投资者那里筹集到1.1亿美元，而此时许多芯片公司正苦于融资困难。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n\n\u003Cp>\u003Ca href=\"https:\u002F\u002Fwww.forbes.com\u002Fsites\u002Fkarlfreund\u002F2022\u002F06\u002F21\u002Fd-matrix-ai-chip-promises-efficient-transformer-processing\u002F\">d-Matrix AI芯片承诺高效处理Transformer模型\u003C\u002Fa>\u003C\u002Fp>\n\u003Cblockquote>\n  \u003Cp>这家初创公司结合了数字存内计算和小芯片技术，用于数据中心级别的推理任务。\u003C\u002Fp>\n\u003C\u002Fblockquote> \n \n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"AIChipCompilers\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>AI芯片编译器\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n1. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fpytorch\u002Fglow\">pytorch\u002Fglow\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"https:\u002F\u002Ftvm.ai\u002F\">TVM：端到端深度学习编译器栈\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"https:\u002F\u002Fwww.tensorflow.org\u002Fxla\">谷歌TensorFlow XLA\u003C\u002Fa>\u003Cbr>\n4. \u003Ca href=\"https:\u002F\u002Fdeveloper.nvidia.com\u002Ftensorrt\">英伟达TensorRT\u003C\u002Fa>\u003Cbr>\n5. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fplaidml\u002Fplaidml\">PlaidML\u003C\u002Fa>\u003Cbr>\n6. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fngraph\">nGraph\u003C\u002Fa>\u003Cbr>\n7. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002FTiramisu-Compiler\u002Ftiramisu\">MIT Tiramisu编译器\u003C\u002Fa>\u003Cbr>\n8. \u003Ca href=\"https:\u002F\u002Fonnc.ai\u002F\">ONNC（开放神经网络编译器）\u003C\u002Fa>\u003Cbr>\n9. \u003Ca href=\"https:\u002F\u002Fmlir.llvm.org\u002F\">MLIR：多级中间表示\u003C\u002Fa>\u003Cbr>\n10. 
\u003Ca href=\"http:\u002F\u002Ftensor-compiler.org\u002F\">张量代数编译器（taco）\u003C\u002Fa>\u003Cbr>\n11. \u003Ca href=\"https:\u002F\u002Ffacebookresearch.github.io\u002FTensorComprehensions\u002F\">Tensor Comprehensions\u003C\u002Fa>\u003Cbr>\n12. \u003Ca href=\"https:\u002F\u002Fwww.polymagelabs.com\u002F\u002F\">PolyMage Labs\u003C\u002Fa>\u003Cbr>\n13. \u003Ca href=\"https:\u002F\u002Foctoml.ai\u002F\">OctoML\u003C\u002Fa>\u003Cbr>\n14. \u003Ca href=\"https:\u002F\u002Fwww.modular.com\u002F\">Modular AI\u003C\u002Fa>\u003Cbr>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"AIChipBenchmarks\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>AI芯片基准测试\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n\n1. \u003Ca href=\"https:\u002F\u002Fdawn.cs.stanford.edu\u002Fbenchmark\u002Findex.html\">DAWNBench：端到端深度学习基准与竞赛——图像分类（ImageNet）\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Frdadolf\u002Ffathom\">Fathom：现代深度学习方法的参考工作负载\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"https:\u002F\u002Fmlperf.org\u002F\">MLPerf：广泛的ML基准测试套件，用于衡量ML软件框架、硬件加速器及云平台的性能\u003C\u002Fa>。\n\u003Cstrong>您可在此处查看最新的MLPerf结果：训练2.1、HPC 2.0、推理tiny 1.0。\u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-training-4q2022\u002F\">点击这里。\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n\u003Cstrong>您可在此处查看MLPerf推理结果v2.1。\u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-inference-v21\u002F\">点击这里。\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n\u003Cstrong>您可在此处查看MLPerf训练结果v1.0。\u003Ca href=\"https:\u002F\u002Fmlcommons.org\u002Fen\u002Fnews\u002Fmlperf-training-2q2022\u002F\">点击这里。\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n\n4. \u003Ca href=\"https:\u002F\u002Faimatrix.ai\u002Fen-us\u002Findex.html\">AI Matrix\u003C\u002Fa>\u003Cbr>\n5. \u003Ca href=\"http:\u002F\u002Fai-benchmark.com\u002Findex.html\">AI-Benchmark\u003C\u002Fa>\u003Cbr>\n6. 
\u003Ca href=\"https:\u002F\u002Fgithub.com\u002FAIIABenchmark\u002FAIIA-DNN-benchmark\">AIIABenchmark\u003C\u002Fa>\u003Cbr>\n7. \u003Ca href=\"https:\u002F\u002Fwww.eembc.org\u002Fmlmark\u002F\">EEMBC MLMark基准测试\u003C\u002Fa>\u003Cbr>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n\n\u003Cp>\u003Ca name=\"Reference\">\u003C\u002Fa>\u003C\u002Fp>\n\n\u003Cdiv align=\"center\">\u003Ch2>参考资料\u003C\u002Fh2>\u003C\u002Fdiv>\n\n\u003Cp>\u003CHR>\n\n\u003Cdiv align=\"center\">\u003Ch3> \u003C\u002Fh3>\u003C\u002Fdiv>\n      \n1. \u003Ca href=\"https:\u002F\u002Fmeanderful.blogspot.jp\u002F2017\u002F06\u002Ffpgas-and-ai-processors-dnn-and-cnn-for.html\">FPGA与AI处理器：面向一切的DNN和CNN\u003C\u002Fa>\u003Cbr>\n2. \u003Ca href=\"http:\u002F\u002Fwww.nanalyze.com\u002F2017\u002F05\u002F12-ai-hardware-startups-new-ai-chips\u002F\">12家打造新型AI芯片的硬件初创公司\u003C\u002Fa>\u003Cbr>\n3. \u003Ca href=\"http:\u002F\u002Feyeriss.mit.edu\u002Ftutorial.html\">深度神经网络硬件架构教程\u003C\u002Fa>\u003Cbr>\n4. \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fnicsefc.ee.tsinghua.edu.cn\u002Fprojects\u002Fneural-network-accelerator\u002F\">神经网络加速器对比\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n5. 《2018年AI芯片技术白皮书》。您可从\u003Ca href=\"https:\u002F\u002Fcloud.tsinghua.edu.cn\u002Ff\u002F9aa0a4f0a5684cc48495\u002F?dl=1\">这里\u003C\u002Fa>下载，或通过\u003Ca href=\"https:\u002F\u002Fdrive.google.com\u002Fopen?id=1ieDm0bpjVWl5MnSESRs92EcmoSzG5vcm\">Google云端硬盘\u003C\u002Fa>获取。\u003Cbr>\n6. \u003Cstrong>“当我们谈论AI芯片时，我们在谈论什么”。 \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FSbX5yz5d3GXaLcl15DO6OQ\">#1\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FzvgDgKpIMIRLFUEW0fFOeg\">#2\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FCKHs5yblcMur4h2BwUBICw\">#3\u003C\u002Fa>,  \u003Ca href=\"https:\u002F\u002Fmp.weixin.qq.com\u002Fs\u002FhFnHhaWWYTFRUsD3HlMbLw\">#4\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n7. 
\u003Cstrong>\u003Ca href=\"https:\u002F\u002Fbirenresearch.github.io\u002FAIChip_Paper_List\u002F\">AI芯片论文列表\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n8. \u003Cstrong>\u003Ca href=\"https:\u002F\u002Fkhairy2011.medium.com\u002Ftpu-vs-gpu-vs-cerebras-vs-graphcore-a-fair-comparison-between-ml-hardware-3f5a19d89e38\">TPU vs GPU vs Cerebras vs Graphcore：ML硬件的公平对比\u003C\u002Fa>\u003C\u002Fstrong>\u003Cbr>\n","# AI-Chip 快速上手指南\n\n**注意**：`AI-Chip` 并非一个需要编译安装的可执行软件或代码库，而是一个由社区维护的**人工智能芯片（ICs 和 IPs）全景列表与资源索引**。它主要托管在 GitHub 上，以 Markdown 文档形式呈现，旨在为开发者、研究人员和行业从业者提供最新的芯片厂商、初创公司动态及技术资讯。\n\n因此，本指南将指导您如何**访问、浏览及利用**该资源，而非传统的软件安装流程。\n\n## 1. 环境准备\n\n由于该项目是静态文档列表，无需特定的操作系统或复杂的依赖环境。您只需要具备以下条件即可开始使用：\n\n*   **网络环境**：能够访问 GitHub (`github.com`)。\n    *   *国内用户建议*：若访问 GitHub 原生地址较慢，建议使用国内镜像站（如 `gitee.com` 搜索同名项目）或配置加速代理。\n*   **浏览工具**：任意现代网页浏览器（Chrome, Edge, Firefox 等）或支持 Markdown 预览的代码编辑器（VS Code, IntelliJ IDEA 等）。\n*   **前置知识**：对 AI 硬件生态（如 GPU, NPU, ASIC, FPGA 等）有基本了解，以便更好地筛选信息。\n\n## 2. 获取与访问步骤\n\n您可以通过以下两种方式查看该资源：\n\n### 方式一：在线直接浏览（推荐）\n直接访问 GitHub 仓库页面，这是获取最新更新最快的方式。\n\n1.  打开浏览器。\n2.  访问项目主页：\n    ```text\n    https:\u002F\u002Fgithub.com\u002Fbasicmi\u002FDeep-Learning-Processor-List\n    ```\n3.  向下滚动阅读 `README.md` 内容，或使用页面内的目录跳转（Shortcut）。\n\n### 方式二：克隆到本地（适合离线查阅或二次编辑）\n如果您希望将列表保存到本地进行检索或贡献内容，请使用 Git 工具。\n\n1.  打开终端（Terminal 或 CMD）。\n2.  
执行克隆命令：\n    ```bash\n    git clone https:\u002F\u002Fgithub.com\u002Fbasicmi\u002FDeep-Learning-Processor-List.git\n    ```\n    *国内加速方案（如果上述命令超时）：*\n    ```bash\n    git clone https:\u002F\u002Fgitee.com\u002Fmirrors\u002FDeep-Learning-Processor-List.git\n    ```\n    *(注：若 Gitee 无同步镜像，请尝试配置 git proxy 或使用上述在线浏览方式)*\n\n3.  进入目录：\n    ```bash\n    cd Deep-Learning-Processor-List\n    ```\n\n4.  使用本地 Markdown 编辑器打开 `README.md` 文件查看。\n\n## 3. 基本使用指南\n\n该项目的核心在于其结构化的分类索引。以下是高效使用该列表的方法：\n\n### 3.1 快速定位芯片厂商\n在项目首页的 **Shortcut** 表格中，您可以直接点击链接跳转到特定类别的厂商列表：\n\n*   **IC Vendors (传统芯片巨头)**: 包含 Intel, Nvidia, Qualcomm, Samsung, AMD, IBM 等。\n    *   *用途*: 查询主流商用芯片（如 NVIDIA Hopper, Intel Habana Gaudi2）的最新架构新闻。\n*   **Tech Giants & HPC Vendors (科技巨头与高性能计算)**: 包含 Google (TPU), Amazon AWS, Microsoft, Apple, Alibaba, Tencent, Baidu, Tesla (Dojo) 等。\n    *   *用途*: 了解云厂商自研芯片及大模型专用硬件进展。\n*   **IP Vendors (知识产权供应商)**: 包含 ARM, Synopsys, Cadence, CEVA 等。\n    *   *用途*: 寻找可集成的 NPU IP 核用于 SoC 设计。\n*   **Startups (全球初创企业)**: 包含 Cerebras, Graphcore, Groq, SambaNova, Lightelligence 等数十家新兴公司。\n    *   *用途*: 探索前沿架构（如存内计算、光计算、类脑计算）和创新解决方案。\n\n### 3.2 查询具体技术细节\n点击具体的公司名称（例如 `Nvidia` 或 `Groq`），您将看到该厂商的最新动态摘要，通常包含：\n*   最新产品发布新闻链接。\n*   核心架构特点（如 \"7nm FinFET\", \"1 million neurons\"）。\n*   性能基准测试引用（如 MLPerf 结果）。\n\n**示例：查询 NVIDIA 最新架构**\n1.  在页面找到 **I. IC Vendors** 章节。\n2.  点击 `Nvidia` 锚点。\n3.  阅读关于 `NVIDIA Hopper Architecture` 或 `Grace CPU` 的简介及官方博客链接。\n\n### 3.3 追踪最新动态\n关注文档顶部的 **Latest updates** 部分，这里按时间倒序列出了最近添加的新闻和公司（如 SambaNova, d-Matrix, Neureality 等），帮助您快速捕捉行业最新风向。\n\n### 3.4 进阶：参与贡献\n如果您发现新的 AI 芯片公司或最新资讯，可以通过以下方式更新列表：\n1.  Fork 该仓库。\n2.  编辑 `README.md` 文件，按照现有格式添加新条目。\n3.  
提交 Pull Request (PR) 至原仓库。","某边缘计算初创公司的硬件选型团队正急需为新一代智能安防摄像头挑选一款兼具低功耗与高算力的 AI 加速芯片，以在三个月内完成原型机开发。\n\n### 没有 AI-Chip 时\n- **信息搜集效率极低**：工程师需手动在数十个厂商官网、新闻稿和技术论坛中碎片化搜索，耗时数天仍难以穷尽所有潜在供应商。\n- **关键参数对比困难**：不同厂商的算力单位（TOPS vs GFLOPS）、制程工艺和功耗数据格式不一，缺乏统一视图导致横向评估极易出错。\n- **错失新兴技术机会**：由于精力局限于 NVIDIA、Intel 等头部大厂，容易忽略如 Groq、Tenstorrent 或 d-Matrix 等具有独特架构优势的初创公司方案。\n- **基准测试数据滞后**：难以快速定位最新的 MLPerf 测试结果，导致选型依据停留在过时的性能数据上，增加项目落地风险。\n\n### 使用 AI-Chip 后\n- **一站式全景扫描**：利用 AI-Chip 整理的全球 IC 与 IP 清单，团队在几小时内即可浏览从云端到边缘端的全产业链玩家，大幅缩短调研周期。\n- **标准化决策支持**：借助工具中分类清晰的架构类型与应用场景标签，工程师能快速筛选出符合“低功耗边缘推理”需求的候选名单并进行精准对比。\n- **发现差异化优势**：通过关注工具更新的初创企业板块，团队发现了针对特定视觉模型优化的新型芯片，成功构建了更具成本竞争力的技术方案。\n- **实时追踪前沿动态**：直接查阅工具链接的最新 MLPerf 基准测试与各厂商（如 Cerebras、SambaNova）的更新日志，确保选型基于当前最权威的性能数据。\n\nAI-Chip 将原本杂乱无章的芯片情报转化为结构化的决策资产，帮助研发团队在激烈的市场竞争中快速锁定最优硬件方案。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fbasicmi_AI-Chip_83ff08c0.png","basicmi","Shan Tang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fbasicmi_d1fb4904.jpg","I have been working in IC industry since 2000. From mid-2016, I started working on IC for Deep Learning.",null,"Beijing","shan.tang.g@gmail.com","https:\u002F\u002Fgithub.com\u002Fbasicmi",[81],{"name":82,"color":83,"percentage":84},"PHP","#4F5D95",100,1704,278,"2026-04-09T18:45:23",1,"","未说明",{"notes":92,"python":90,"dependencies":93},"该仓库并非可运行的 AI 软件工具，而是一个关于人工智能芯片（ICs 和 IPs）的行业资讯、厂商列表和技术图谱的汇总清单。它不包含源代码、安装脚本或运行环境需求，主要功能是提供指向各大芯片厂商（如 Nvidia, Intel, Google 等）及初创公司新闻和文档的外部链接。",[],[14],[96,97,98,99,100],"chip","ai-chips","machine-learning","deep-learning","processor","2026-03-27T02:49:30.150509","2026-04-14T15:24:44.060234",[],[]]