[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"tool-guan-yuan--Awesome-AutoML-and-Lightweight-Models":3,"similar-guan-yuan--Awesome-AutoML-and-Lightweight-Models":60},{"id":4,"github_repo":5,"name":6,"description_en":7,"description_zh":8,"ai_summary_zh":8,"readme_en":9,"readme_zh":10,"quickstart_zh":11,"use_case_zh":12,"hero_image_url":13,"owner_login":14,"owner_name":15,"owner_avatar_url":16,"owner_bio":17,"owner_company":18,"owner_location":19,"owner_email":20,"owner_twitter":18,"owner_website":18,"owner_url":21,"languages":18,"stars":22,"forks":23,"last_commit_at":24,"license":18,"difficulty_score":25,"env_os":26,"env_gpu":27,"env_ram":26,"env_deps":28,"category_tags":36,"github_topics":38,"view_count":54,"oss_zip_url":18,"oss_zip_packed_at":18,"status":55,"created_at":56,"updated_at":57,"faqs":58,"releases":59},5776,"guan-yuan\u002FAwesome-AutoML-and-Lightweight-Models","Awesome-AutoML-and-Lightweight-Models","A list of high-quality (newest) AutoML works and lightweight models including 1.) Neural Architecture Search, 2.) Lightweight Structures, 3.) Model Compression, Quantization and Acceleration, 4.) Hyperparameter Optimization, 5.) Automated Feature Engineering.","Awesome-AutoML-and-Lightweight-Models 是一个专注于自动化机器学习（AutoML）与轻量级模型的高质量开源资源库。它系统性地整理了该领域最新的研究成果，涵盖神经架构搜索（NAS）、轻量化网络结构、模型压缩与量化加速、超参数优化以及自动特征工程等五大核心方向。\n\n在深度学习模型日益庞大、部署难度增加的背景下，该项目致力于解决如何高效设计高性能且低资源消耗的模型难题。通过汇集包括 DARTS、ProxylessNAS、FBNet 等经典论文及其对应的代码实现，它为研究者提供了从理论到实践的一站式参考，帮助快速复现前沿算法或寻找创新灵感。\n\n这份资源特别适合人工智能领域的研究人员、算法工程师及开发者使用。无论是希望深入探索自动架构搜索机制的学者，还是需要在移动端或嵌入式设备上部署高效模型的工程团队，都能从中获益。其独特亮点在于不仅罗列文献，更强调“最新”与“高质量”，并明确标注了论文发表的顶级会议（如 CVPR、ICLR、NIPS）及开源代码链接，极大降低了技术调研与入门的门槛，是连接学术前沿与工业落地的实用桥梁。","# awesome-AutoML-and-Lightweight-Models\nA list of high-quality (newest) AutoML works and lightweight models including **1.) Neural Architecture Search**, **2.) Lightweight Structures**, **3.) Model Compression, Quantization and Acceleration**, **4.) Hyperparameter Optimization**, **5.) 
Automated Feature Engineering**.\n\nThis repo aims to provide information for AutoML research (especially on lightweight models). PRs adding works (papers, repositories) missing from this list are welcome.\n\n## 1.) Neural Architecture Search\n### **[Papers]**   \n**Gradient:**\n- [When NAS Meets Robustness: In Search of Robust Architectures against Adversarial Attacks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.10695) | [**CVPR 2020**]\n  + [gmh14\u002FRobNets](https:\u002F\u002Fgithub.com\u002Fgmh14\u002FRobNets) | [Pytorch]\n\n- [Searching for A Robust Neural Architecture in Four GPU Hours](https:\u002F\u002Fxuanyidong.com\u002Fpublication\u002Fcvpr-2019-gradient-based-diff-sampler\u002F) | [**CVPR 2019**]\n  + [D-X-Y\u002FGDAS](https:\u002F\u002Fgithub.com\u002FD-X-Y\u002FGDAS) | [Pytorch]\n\n- [ASAP: Architecture Search, Anneal and Prune](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04123) | [2019\u002F04]\n\n- [Single-Path NAS: Designing Hardware-Efficient ConvNets in less than 4 Hours](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.02877) | [2019\u002F04]\n  + [dstamoulis\u002Fsingle-path-nas](https:\u002F\u002Fgithub.com\u002Fdstamoulis\u002Fsingle-path-nas) | [Tensorflow]\n\n- [Automatic Convolutional Neural Architecture Search for Image Classification Under Different Scenes](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8676019) | [**IEEE Access 2019**]\n\n- [sharpDARTS: Faster and More Accurate Differentiable Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.09900) | [2019\u002F03]\n\n- [Learning Implicitly Recurrent CNNs Through Parameter Sharing](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09701) | [**ICLR 2019**]\n  + [lolemacs\u002Fsoft-sharing](https:\u002F\u002Fgithub.com\u002Flolemacs\u002Fsoft-sharing) | [Pytorch]\n\n- [Probabilistic Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.05116) | [2019\u002F02]\n\n- [Auto-DeepLab: Hierarchical Neural Architecture Search for Semantic 
Image Segmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02985) | [2019\u002F01]\n\n- [SNAS: Stochastic Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.09926) | [**ICLR 2019**]\n\n- [FBNet: Hardware-Aware Efficient ConvNet Design via Differentiable Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.03443) | [2018\u002F12]\n\n- [Neural Architecture Optimization](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8007-neural-architecture-optimization) | [**NIPS 2018**]\n  + [renqianluo\u002FNAO](https:\u002F\u002Fgithub.com\u002Frenqianluo\u002FNAO) | [Tensorflow]\n\n- [DARTS: Differentiable Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.09055) | [2018\u002F06]\n  + [quark0\u002Fdarts](https:\u002F\u002Fgithub.com\u002Fquark0\u002Fdarts) | [Pytorch]\n  + [khanrc\u002Fpt.darts](https:\u002F\u002Fgithub.com\u002Fkhanrc\u002Fpt.darts) | [Pytorch]\n  + [dragen1860\u002FDARTS-PyTorch](https:\u002F\u002Fgithub.com\u002Fdragen1860\u002FDARTS-PyTorch) | [Pytorch]\n\n**Reinforcement Learning:**  \n- [Template-Based Automatic Search of Compact Semantic Segmentation Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.02365) | [2019\u002F04]\n\n- [Understanding Neural Architecture Search Techniques](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00438) | [2019\u002F03]\n\n- [Fast, Accurate and Lightweight Super-Resolution with Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.07261) | [2019\u002F01]\n  + [falsr\u002FFALSR](https:\u002F\u002Fgithub.com\u002Ffalsr\u002FFALSR) | [Tensorflow]\n\n- [Multi-Objective Reinforced Evolution in Mobile Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.01074) | [2019\u002F01]\n  + [moremnas\u002FMoreMNAS](https:\u002F\u002Fgithub.com\u002Fmoremnas\u002FMoreMNAS) | [Tensorflow]\n\n- [ProxylessNAS: Direct Neural Architecture Search on Target Task and 
Hardware](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.00332) | [**ICLR 2019**]\n  + [MIT-HAN-LAB\u002FProxylessNAS](https:\u002F\u002Fgithub.com\u002FMIT-HAN-LAB\u002FProxylessNAS) | [Pytorch, Tensorflow]\n\n- [Transfer Learning with Neural AutoML](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8056-transfer-learning-with-neural-automl) | [**NIPS 2018**]\n\n- [Learning Transferable Architectures for Scalable Image Recognition](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) | [2018\u002F07]\n  + [wandering007\u002Fnasnet-pytorch](https:\u002F\u002Fgithub.com\u002Fwandering007\u002Fnasnet-pytorch) | [Pytorch]\n  + [tensorflow\u002Fmodels\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) | [Tensorflow]\n\n- [MnasNet: Platform-Aware Neural Architecture Search for Mobile](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.11626) | [2018\u002F07]\n  + [AnjieZheng\u002FMnasNet-PyTorch](https:\u002F\u002Fgithub.com\u002FAnjieZheng\u002FMnasNet-PyTorch) | [Pytorch]\n\n- [Practical Block-wise Neural Network Architecture Generation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05552) | [**CVPR 2018**]\n\n- [Efficient Neural Architecture Search via Parameter Sharing](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03268) | [**ICML 2018**]\n  + [melodyguan\u002Fenas](https:\u002F\u002Fgithub.com\u002Fmelodyguan\u002Fenas) | [Tensorflow]\n  + [carpedm20\u002FENAS-pytorch](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch) | [Pytorch]\n  \n- [Efficient Architecture Search by Network Transformation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04873) | [**AAAI 2018**]\n\n**Evolutionary Algorithm:**\n- [Single Path One-Shot Neural Architecture Search with Uniform Sampling](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00420) | [2019\u002F04]\n\n- [DetNAS: Neural Architecture Search on Object 
Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.10979) | [2019\u002F03]\n\n- [The Evolved Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.11117) | [2019\u002F01]\n\n- [Designing neural networks through neuroevolution](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-018-0006-z) | [**Nature Machine Intelligence 2019**]\n\n- [EAT-NAS: Elastic Architecture Transfer for Accelerating Large-scale Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.05884) | [2019\u002F01]\n\n- [Efficient Multi-objective Neural Architecture Search via Lamarckian Evolution](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09081) | [**ICLR 2019**]\n\n**SMBO:**\n- [MFAS: Multimodal Fusion Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06496) | [**CVPR 2019**]\n\n- [DPP-Net: Device-aware Progressive Search for Pareto-optimal Neural Architectures](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.08198) | [**ECCV 2018**]\n\n- [Progressive Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) | [**ECCV 2018**]\n  + [titu1994\u002Fprogressive-neural-architecture-search](https:\u002F\u002Fgithub.com\u002Ftitu1994\u002Fprogressive-neural-architecture-search) | [Keras, Tensorflow]\n  + [chenxi116\u002FPNASNet.pytorch](https:\u002F\u002Fgithub.com\u002Fchenxi116\u002FPNASNet.pytorch) | [Pytorch]\n\n**Random Search:**\n- [Exploring Randomly Wired Neural Networks for Image Recognition](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.01569) | [2019\u002F04]\n\n- [Searching for Efficient Multi-Scale Architectures for Dense Image Prediction](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8087-searching-for-efficient-multi-scale-architectures-for-dense-image-prediction) | [**NIPS 2018**]\n\n**Hypernetwork:**\n- [Graph HyperNetworks for Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05749) | [**ICLR 2019**]\n\n**Bayesian Optimization:**\n- [Inductive Transfer for Neural Architecture 
Optimization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03536) | [2019\u002F03]\n\n**Partial Order Pruning**\n- [Partial Order Pruning: for Best Speed\u002FAccuracy Trade-off in Neural Architecture Search](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03777) | [**CVPR 2019**]\n  + [lixincn2015\u002FPartial-Order-Pruning](https:\u002F\u002Fgithub.com\u002Flixincn2015\u002FPartial-Order-Pruning) | [Caffe]\n\n**Knowledge Distillation**\n- [Improving Neural Architecture Search Image Classifiers via Ensemble Learning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06236) | [2019\u002F03]\n\n### **[Projects]**\n- [Microsoft\u002Fnni](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) | [Python]\n- [MindsDB](https:\u002F\u002Fgithub.com\u002Fmindsdb\u002Fmindsdb) | [Python]\n\n## 2.) Lightweight Structures\n### **[Papers]**  \n**Image Classification:**\n- [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html) | [**ICML 2019**]\n  + [tensorflow\u002Ftpu\u002Fmodels\u002Fofficial\u002Fefficientnet\u002F](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Ftree\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet) | [Tensorflow]\n  + [lukemelas\u002FEfficientNet-PyTorch](https:\u002F\u002Fgithub.com\u002Flukemelas\u002FEfficientNet-PyTorch) | [Pytorch]\n\n- [Searching for MobileNetV3](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) | [2019\u002F05]\n  + [kuan-wang\u002Fpytorch-mobilenet-v3](https:\u002F\u002Fgithub.com\u002Fkuan-wang\u002Fpytorch-mobilenet-v3) | [Pytorch]\n  + [leaderj1001\u002FMobileNetV3-Pytorch](https:\u002F\u002Fgithub.com\u002Fleaderj1001\u002FMobileNetV3-Pytorch) | [Pytorch]\n\n**Semantic Segmentation:**\n- [CGNet: A Light-weight Context Guided Network for Semantic Segmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.08201) | [2019\u002F04]\n  + [wutianyiRosun\u002FCGNet](https:\u002F\u002Fgithub.com\u002FwutianyiRosun\u002FCGNet) | 
[Pytorch]\n\n- [ESPNetv2: A Light-weight, Power Efficient, and General Purpose Convolutional Neural Network](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.11431) | [2018\u002F11]\n  + [sacmehta\u002FESPNetv2](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNetv2) | [Pytorch]\n  \n- [ESPNet: Efficient Spatial Pyramid of Dilated Convolutions for Semantic Segmentation](https:\u002F\u002Fsacmehta.github.io\u002FESPNet\u002F) | [**ECCV 2018**]\n  + [sacmehta\u002FESPNet](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNet\u002F) | [Pytorch]\n\n- [BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.00897) | [**ECCV 2018**]\n  + [ooooverflow\u002FBiSeNet](https:\u002F\u002Fgithub.com\u002Fooooverflow\u002FBiSeNet) | [Pytorch]\n  + [ycszen\u002FTorchSeg](https:\u002F\u002Fgithub.com\u002Fycszen\u002FTorchSeg) | [Pytorch]\n  \n- [ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation](http:\u002F\u002Fwww.robesafe.uah.es\u002Fpersonal\u002Feduardo.romera\u002Fpdfs\u002FRomera17tits.pdf) | [**T-ITS 2017**]\n  + [Eromera\u002Ferfnet_pytorch](https:\u002F\u002Fgithub.com\u002FEromera\u002Ferfnet_pytorch) | [Pytorch]\n\n**Object Detection:**\n- [ThunderNet: Towards Real-time Generic Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.11752) | [2019\u002F03]\n\n- [Pooling Pyramid Network for Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03284) | [2018\u002F09]\n  + [tensorflow\u002Fmodels](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fmodels) | [Tensorflow]\n\n- [Tiny-DSOD: Lightweight Object Detection for Resource-Restricted Usages](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.11013) | [**BMVC 2018**]\n  + [lyxok1\u002FTiny-DSOD](https:\u002F\u002Fgithub.com\u002Flyxok1\u002FTiny-DSOD) | [Caffe]\n\n- [Pelee: A Real-Time Object Detection System on Mobile 
Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.06882) | [**NeurIPS 2018**]\n  + [Robert-JunWang\u002FPelee](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPelee) | [Caffe]\n  + [Robert-JunWang\u002FPeleeNet](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPeleeNet) | [Pytorch]\n\n- [Receptive Field Block Net for Accurate and Fast Object Detection](https:\u002F\u002Feccv2018.org\u002Fopenaccess\u002Fcontent_ECCV_2018\u002Fpapers\u002FSongtao_Liu_Receptive_Field_Block_ECCV_2018_paper.pdf) | [**ECCV 2018**]\n  + [ruinmessi\u002FRFBNet](https:\u002F\u002Fgithub.com\u002Fruinmessi\u002FRFBNet) | [Pytorch]\n  + [ShuangXieIrene\u002Fssds.pytorch](https:\u002F\u002Fgithub.com\u002FShuangXieIrene\u002Fssds.pytorch) | [Pytorch]\n  + [lzx1413\u002FPytorchSSD](https:\u002F\u002Fgithub.com\u002Flzx1413\u002FPytorchSSD) | [Pytorch]\n\n- [FSSD: Feature Fusion Single Shot Multibox Detector](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00960) | [2017\u002F12]\n  + [ShuangXieIrene\u002Fssds.pytorch](https:\u002F\u002Fgithub.com\u002FShuangXieIrene\u002Fssds.pytorch) | [Pytorch]\n  + [lzx1413\u002FPytorchSSD](https:\u002F\u002Fgithub.com\u002Flzx1413\u002FPytorchSSD) | [Pytorch]\n  + [dlyldxwl\u002Ffssd.pytorch](https:\u002F\u002Fgithub.com\u002Fdlyldxwl\u002Ffssd.pytorch) | [Pytorch]\n\n- [Feature Pyramid Networks for Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.03144) | [**CVPR 2017**]\n  + [tensorflow\u002Fmodels](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fmodels) | [Tensorflow]\n\n## 3.) 
Model Compression & Acceleration\n### **[Papers]** \n**Pruning:**\n- [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635) | [**ICLR 2019**]\n  + [google-research\u002Flottery-ticket-hypothesis](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Flottery-ticket-hypothesis) | [Tensorflow]\n\n- [Rethinking the Value of Network Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05270) | [**ICLR 2019**]\n\n- [Slimmable Neural Networks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=H1gMCsAqY7) | [**ICLR 2019**]\n  + [JiahuiYu\u002Fslimmable_networks](https:\u002F\u002Fgithub.com\u002FJiahuiYu\u002Fslimmable_networks) | [Pytorch]\n\n- [AMC: AutoML for Model Compression and Acceleration on Mobile Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03494) | [**ECCV 2018**]\n  + [AutoML for Model Compression (AMC): Trials and Tribulations](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller\u002Fwiki\u002FAutoML-for-Model-Compression-(AMC):-Trials-and-Tribulations) | [Pytorch]\n\n- [Learning Efficient Convolutional Networks through Network Slimming](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06519) | [**ICCV 2017**]\n  + [foolwood\u002Fpytorch-slimming](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002Fpytorch-slimming) | [Pytorch]\n\n- [Channel Pruning for Accelerating Very Deep Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.06168) | [**ICCV 2017**]\n  + [yihui-he\u002Fchannel-pruning](https:\u002F\u002Fgithub.com\u002Fyihui-he\u002Fchannel-pruning) | [Caffe]\n\n- [Pruning Convolutional Neural Networks for Resource Efficient Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06440) | [**ICLR 2017**]\n  + [jacobgil\u002Fpytorch-pruning](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-pruning) | [Pytorch]\n\n- [Pruning Filters for Efficient ConvNets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.08710) | [**ICLR 2017**]\n\n**Quantization:**\n- 
[Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.05662) | [**ICLR 2019**]\n\n- [Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FJacob_Quantization_and_Training_CVPR_2018_paper.html) | [**CVPR 2018**]\n\n- [Quantizing deep convolutional networks for efficient inference: A whitepaper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.08342) | [2018\u002F06]\n\n- [PACT: Parameterized Clipping Activation for Quantized Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.06085) | [2018\u002F05]\n\n- [Post-training 4-bit quantization of convolution networks for rapid-deployment](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05723) | [2018\u002F10]\n\n- [WRPN: Wide Reduced-Precision Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01134) | [**ICLR 2018**]\n\n- [Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.03044) | [**ICLR 2017**]\n\n- [DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.06160) | [2016\u002F06]\n\n- [Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1308.3432) | [2013\u002F08]\n\n**Knowledge Distillation:**\n- [Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.05852) | [**ICLR 2018**]\n\n- [Model compression via distillation and quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05668) | [**ICLR 2018**]\n\n**Acceleration:**\n- [Fast Algorithms for Convolutional Neural 
Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FLavin_Fast_Algorithms_for_CVPR_2016_paper.pdf) | [**CVPR 2016**]\n  + [andravin\u002Fwincnn](https:\u002F\u002Fgithub.com\u002Fandravin\u002Fwincnn) | [Python]\n\n### **[Projects]**\n- [NervanaSystems\u002Fdistiller](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller\u002F) | [Pytorch]\n- [Tencent\u002FPocketFlow](https:\u002F\u002Fgithub.com\u002FTencent\u002FPocketFlow) | [Tensorflow]\n- [aaron-xichen\u002Fpytorch-playground](https:\u002F\u002Fgithub.com\u002Faaron-xichen\u002Fpytorch-playground) | [Pytorch]\n\n### **[Tutorials\u002FBlogs]**\n- [Introducing the CVPR 2018 On-Device Visual Intelligence Challenge](https:\u002F\u002Fresearch.googleblog.com\u002Fsearch\u002Flabel\u002FOn-device%20Learning)\n\n## 4.) Hyperparameter Optimization\n### **[Papers]** \n- [Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06694) | [2019\u002F03]\n  + [dragonfly\u002Fdragonfly](https:\u002F\u002Fgithub.com\u002Fdragonfly\u002Fdragonfly)\n\n- [Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8115-efficient-high-dimensional-bayesian-optimization-with-additivity-and-quadrature-fourier-features) | [**NeurIPS 2018**]\n\n- [Google vizier: A service for black-box optimization](https:\u002F\u002Fstatic.googleusercontent.com\u002Fmedia\u002Fresearch.google.com\u002Fen\u002F\u002Fpubs\u002Farchive\u002F46180.pdf) | [**SIGKDD 2017**]\n\n- [On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15745) | [**Neurocomputing 2020**]\n  + 
[LiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms](https:\u002F\u002Fgithub.com\u002FLiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms)\n\n### **[Projects]**\n- [BoTorch](https:\u002F\u002Fbotorch.org\u002F) | [PyTorch]\n- [Ax (Adaptive Experimentation Platform)](https:\u002F\u002Fax.dev\u002F) | [PyTorch]\n- [Microsoft\u002Fnni](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) | [Python]\n- [dragonfly\u002Fdragonfly](https:\u002F\u002Fgithub.com\u002Fdragonfly\u002Fdragonfly) | [Python]\n- [LiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms](https:\u002F\u002Fgithub.com\u002FLiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms) | [Python]\n\n### **[Tutorials\u002FBlogs]**\n- [Hyperparameter tuning in Cloud Machine Learning Engine using Bayesian Optimization](https:\u002F\u002Fcloud.google.com\u002Fblog\u002Fproducts\u002Fgcp\u002Fhyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization)\n\n- [Overview of Bayesian Optimization](https:\u002F\u002Fsoubhikbarari.github.io\u002Fblog\u002F2016\u002F09\u002F14\u002Foverview-of-bayesian-optimization)\n\n- [Bayesian optimization](http:\u002F\u002Fkrasserm.github.io\u002F2018\u002F03\u002F21\u002Fbayesian-optimization\u002F)\n  + [krasserm\u002Fbayesian-machine-learning](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fbayesian-machine-learning) | [Python]\n\n## 5.) 
Automated Feature Engineering\n\n## Model Analyzer\n- [Netscope CNN Analyzer](https:\u002F\u002Fchakkritte.github.io\u002Fnetscope\u002Fquickstart.html) | [Caffe]\n\n- [sksq96\u002Fpytorch-summary](https:\u002F\u002Fgithub.com\u002Fsksq96\u002Fpytorch-summary) | [Pytorch]\n\n- [Lyken17\u002Fpytorch-OpCounter](https:\u002F\u002Fgithub.com\u002FLyken17\u002Fpytorch-OpCounter) | [Pytorch]\n\n- [sovrasov\u002Fflops-counter.pytorch](https:\u002F\u002Fgithub.com\u002Fsovrasov\u002Fflops-counter.pytorch) | [Pytorch]\n\n## References\n- [LITERATURE ON NEURAL ARCHITECTURE SEARCH](https:\u002F\u002Fwww.ml4aad.org\u002Fautoml\u002Fliterature-on-neural-architecture-search\u002F)\n- [handong1587\u002Fhandong1587.github.io](https:\u002F\u002Fgithub.com\u002Fhandong1587\u002Fhandong1587.github.io\u002Ftree\u002Fmaster\u002F_posts\u002Fdeep_learning)\n- [hibayesian\u002Fawesome-automl-papers](https:\u002F\u002Fgithub.com\u002Fhibayesian\u002Fawesome-automl-papers)\n- [mrgloom\u002Fawesome-semantic-segmentation](https:\u002F\u002Fgithub.com\u002Fmrgloom\u002Fawesome-semantic-segmentation)\n- [amusi\u002Fawesome-object-detection](https:\u002F\u002Fgithub.com\u002Famusi\u002Fawesome-object-detection)\n","# 令人惊叹的AutoML与轻量级模型\n一份高质量（最新）的AutoML研究成果和轻量级模型列表，包括**1.) 神经架构搜索**、**2.) 轻量级网络结构**、**3.) 模型压缩、量化与加速**、**4.) 超参数优化**、**5.) 自动化特征工程**。\n\n本仓库旨在为AutoML研究（尤其是轻量级模型）提供信息。欢迎提交未收录的相关工作（论文、代码库）以供补充。\n\n## 1.) 
神经架构搜索\n\n### **[论文]**   \n**梯度法：**\n- [当NAS遇到鲁棒性：寻找对抗攻击下的鲁棒架构](https:\u002F\u002Farxiv.org\u002Fabs\u002F1911.10695) | [**CVPR 2020**]\n  + [gmh14\u002FRobNets](https:\u002F\u002Fgithub.com\u002Fgmh14\u002FRobNets) | [Pytorch]\n\n- [在四 GPU 小时内搜索鲁棒神经网络架构](https:\u002F\u002Fxuanyidong.com\u002Fpublication\u002Fcvpr-2019-gradient-based-diff-sampler\u002F) | [**CVPR 2019**]\n  + [D-X-Y\u002FGDAS](https:\u002F\u002Fgithub.com\u002FD-X-Y\u002FGDAS) | [Pytorch]\n\n- [ASAP：架构搜索、退火与剪枝](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.04123) | [2019年4月]\n\n- [单路径 NAS：在不到4小时内设计硬件高效的卷积神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.02877) | [2019年4月]\n  + [dstamoulis\u002Fsingle-path-nas](https:\u002F\u002Fgithub.com\u002Fdstamoulis\u002Fsingle-path-nas) | [Tensorflow]\n\n- [面向不同场景下图像分类的自动卷积神经网络架构搜索](https:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F8676019) | [**IEEE Access 2019**]\n\n- [sharpDARTS：更快更精确的可微架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.09900) | [2019年3月]\n\n- [通过参数共享隐式学习循环卷积神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.09701) | [**ICLR 2019**]\n  + [lolemacs\u002Fsoft-sharing](https:\u002F\u002Fgithub.com\u002Flolemacs\u002Fsoft-sharing) | [Pytorch]\n\n- [概率神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1902.05116) | [2019年2月]\n\n- [Auto-DeepLab：用于语义分割的层次化神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.02985) | [2019年1月]\n\n- [SNAS：随机神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.09926) | [**ICLR 2019**]\n\n- [FBNet：基于可微架构搜索的硬件感知高效卷积网络设计](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.03443) | [2018年12月]\n\n- [神经架构优化](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8007-neural-architecture-optimization) | [**NIPS 2018**]\n  + [renqianluo\u002FNAO](https:\u002F\u002Fgithub.com\u002Frenqianluo\u002FNAO) | [Tensorflow]\n\n- [DARTS：可微架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.09055) | [2018年6月]\n  + [quark0\u002Fdarts](https:\u002F\u002Fgithub.com\u002Fquark0\u002Fdarts) | [Pytorch]\n  + 
[khanrc\u002Fpt.darts](https:\u002F\u002Fgithub.com\u002Fkhanrc\u002Fpt.darts) | [Pytorch]\n  + [dragen1860\u002FDARTS-PyTorch](https:\u002F\u002Fgithub.com\u002Fdragen1860\u002FDARTS-PyTorch) | [Pytorch]\n\n**强化学习：**\n- [基于模板的紧凑语义分割架构自动搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.02365) | [2019年4月]\n\n- [理解神经架构搜索技术](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00438) | [2019年3月]\n\n- [利用神经架构搜索实现快速、准确且轻量级的超分辨率](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.07261) | [2019年1月]\n  + [falsr\u002FFALSR](https:\u002F\u002Fgithub.com\u002Ffalsr\u002FFALSR) | [Tensorflow]\n\n- [移动设备上的多目标强化进化神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.01074) | [2019年1月]\n  + [moremnas\u002FMoreMNAS](https:\u002F\u002Fgithub.com\u002Fmoremnas\u002FMoreMNAS) | [Tensorflow]\n\n- [ProxylessNAS：直接在目标任务和硬件上进行神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1812.00332) | [**ICLR 2019**]\n  + [MIT-HAN-LAB\u002FProxylessNAS](https:\u002F\u002Fgithub.com\u002FMIT-HAN-LAB\u002FProxylessNAS) | [Pytorch, Tensorflow]\n\n- [使用神经 AutoML 进行迁移学习](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8056-transfer-learning-with-neural-automl) | [**NIPS 2018**]\n\n- [学习可迁移的架构以实现可扩展的图像识别](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.07012) | [2018年7月]\n  + [wandering007\u002Fnasnet-pytorch](https:\u002F\u002Fgithub.com\u002Fwandering007\u002Fnasnet-pytorch) | [Pytorch]\n  + [tensorflow\u002Fmodels\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fslim\u002Fnets\u002Fnasnet) | [Tensorflow]\n\n- [MnasNet：面向移动端的平台感知神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.11626) | [2018年7月]\n  + [AnjieZheng\u002FMnasNet-PyTorch](https:\u002F\u002Fgithub.com\u002FAnjieZheng\u002FMnasNet-PyTorch) | [Pytorch]\n\n- [实用的分块式神经网络架构生成](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.05552) | [**CVPR 2018**]\n\n- [通过参数共享实现高效的神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03268) | [**ICML 2018**]\n  + 
[melodyguan\u002Fenas](https:\u002F\u002Fgithub.com\u002Fmelodyguan\u002Fenas) | [Tensorflow]\n  + [carpedm20\u002FENAS-pytorch](https:\u002F\u002Fgithub.com\u002Fcarpedm20\u002FENAS-pytorch) | [Pytorch]\n\n- [通过网络变换实现高效的架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.04873) | [**AAAI 2018**]\n\n**进化算法：**\n- [采用均匀采样的单路径一次性神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.00420) | [2019年4月]\n\n- [DetNAS：目标检测领域的神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.10979) | [2019年3月]\n\n- [进化的 Transformer](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.11117) | [2019年1月]\n\n- [通过神经进化设计神经网络](https:\u002F\u002Fwww.nature.com\u002Farticles\u002Fs42256-018-0006-z) | [**Nature Machine Intelligence 2019**]\n\n- [EAT-NAS：弹性架构迁移加速大规模神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1901.05884) | [2019年1月]\n\n- [通过拉马克式进化实现高效的多目标神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.09081) | [**ICLR 2019**]\n\n**SMBO：**\n- [MFAS：多模态融合架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06496) | [**CVPR 2019**]\n\n- [DPP-Net：设备感知的帕累托最优神经架构渐进式搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.08198) | [**ECCV 2018**]\n\n- [渐进式神经架构搜索](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00559) | [**ECCV 2018**]\n  + [titu1994\u002Fprogressive-neural-architecture-search](https:\u002F\u002Fgithub.com\u002Ftitu1994\u002Fprogressive-neural-architecture-search) | [Keras, Tensorflow]\n  + [chenxi116\u002FPNASNet.pytorch](https:\u002F\u002Fgithub.com\u002Fchenxi116\u002FPNASNet.pytorch) | [Pytorch]\n\n**随机搜索：**\n- [探索随机连接的神经网络用于图像识别](https:\u002F\u002Farxiv.org\u002Fabs\u002F1904.01569) | [2019年4月]\n\n- [为密集图像预测搜索高效的多尺度架构](http:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8087-searching-for-efficient-multi-scale-architectures-for-dense-image-prediction) | [**NIPS 2018**]\n\n**超网络：**\n- [用于神经架构搜索的图超网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05749) | [**ICLR 2019**]\n\n**贝叶斯优化：**\n- [归纳迁移用于神经架构优化](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03536) | [2019年3月]\n\n**偏序剪枝：**\n- 
[偏序剪枝：用于神经架构搜索中最佳速度\u002F精度权衡](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.03777) | [**CVPR 2019**]\n  + [lixincn2015\u002FPartial-Order-Pruning](https:\u002F\u002Fgithub.com\u002Flixincn2015\u002FPartial-Order-Pruning) | [Caffe]\n\n**知识蒸馏：**\n- [通过集成学习改进神经架构搜索图像分类器](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06236) | [2019年3月]\n\n### **[项目]**\n- [Microsoft\u002Fnni](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) | [Python]\n- [MindsDB](https:\u002F\u002Fgithub.com\u002Fmindsdb\u002Fmindsdb) | [Python]\n\n## 2.) 轻量化结构\n\n### **[论文]**  \n**图像分类：**\n- [EfficientNet：重新思考卷积神经网络的模型缩放](http:\u002F\u002Fproceedings.mlr.press\u002Fv97\u002Ftan19a.html) | [**ICML 2019**]\n  + [tensorflow\u002Ftpu\u002Fmodels\u002Fofficial\u002Fefficientnet\u002F](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Ftpu\u002Ftree\u002Fmaster\u002Fmodels\u002Fofficial\u002Fefficientnet) | [TensorFlow]\n  + [lukemelas\u002FEfficientNet-PyTorch](https:\u002F\u002Fgithub.com\u002Flukemelas\u002FEfficientNet-PyTorch) | [PyTorch]\n\n- [搜索MobileNetV3](https:\u002F\u002Farxiv.org\u002Fabs\u002F1905.02244) | [2019年5月]\n  + [kuan-wang\u002Fpytorch-mobilenet-v3](https:\u002F\u002Fgithub.com\u002Fkuan-wang\u002Fpytorch-mobilenet-v3) | [PyTorch]\n  + [leaderj1001\u002FMobileNetV3-Pytorch](https:\u002F\u002Fgithub.com\u002Fleaderj1001\u002FMobileNetV3-Pytorch) | [PyTorch]\n\n**语义分割：**\n- [CGNet：一种用于语义分割的轻量级上下文引导网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.08201) | [2019年4月]\n  + [wutianyiRosun\u002FCGNet](https:\u002F\u002Fgithub.com\u002FwutianyiRosun\u002FCGNet) | [PyTorch]\n\n- [ESPNetv2：一种轻量级、高效且通用的卷积神经网络](https:\u002F\u002Farxiv.org\u002Fabs\u002F1811.11431) | [2018年11月]\n  + [sacmehta\u002FESPNetv2](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNetv2) | [PyTorch]\n\n- [ESPNet：用于语义分割的高效空洞卷积空间金字塔](https:\u002F\u002Fsacmehta.github.io\u002FESPNet\u002F) | [**ECCV 2018**]\n  + [sacmehta\u002FESPNet](https:\u002F\u002Fgithub.com\u002Fsacmehta\u002FESPNet) | [PyTorch]\n\n- 
[BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1808.00897) | [**ECCV 2018**]\n  + [ooooverflow\u002FBiSeNet](https:\u002F\u002Fgithub.com\u002Fooooverflow\u002FBiSeNet) | [PyTorch]\n  + [ycszen\u002FTorchSeg](https:\u002F\u002Fgithub.com\u002Fycszen\u002FTorchSeg) | [PyTorch]\n\n- [ERFNet: Efficient Residual Factorized ConvNet for Real-time Semantic Segmentation](http:\u002F\u002Fwww.robesafe.uah.es\u002Fpersonal\u002Feduardo.romera\u002Fpdfs\u002FRomera17tits.pdf) | [**T-ITS 2017**]\n  + [Eromera\u002Ferfnet_pytorch](https:\u002F\u002Fgithub.com\u002FEromera\u002Ferfnet_pytorch) | [PyTorch]\n\n**Object detection:**\n- [ThunderNet: Towards Real-time Generic Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.11752) | [2019\u002F03]\n\n- [Pooling Pyramid Network for Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.03284) | [2018\u002F09]\n  + [tensorflow\u002Fmodels](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fmodels) | [TensorFlow]\n\n- [Tiny-DSOD: Lightweight Object Detection for Resource-restricted Usages](https:\u002F\u002Farxiv.org\u002Fabs\u002F1807.11013) | [**BMVC 2018**]\n  + [lyxok1\u002FTiny-DSOD](https:\u002F\u002Fgithub.com\u002Flyxok1\u002FTiny-DSOD) | [Caffe]\n\n- [Pelee: A Real-Time Object Detection System on Mobile Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F1804.06882) | [**NeurIPS 2018**]\n  + [Robert-JunWang\u002FPelee](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPelee) | [Caffe]\n  + [Robert-JunWang\u002FPeleeNet](https:\u002F\u002Fgithub.com\u002FRobert-JunWang\u002FPeleeNet) | [PyTorch]\n\n- [Receptive Field Block Net for Accurate and Fast Object Detection](https:\u002F\u002Feccv2018.org\u002Fopenaccess\u002Fcontent_ECCV_2018\u002Fpapers\u002FSongtao_Liu_Receptive_Field_Block_ECCV_2018_paper.pdf) | [**ECCV 2018**]\n  + [ruinmessi\u002FRFBNet](https:\u002F\u002Fgithub.com\u002Fruinmessi\u002FRFBNet) | [PyTorch]\n  + [ShuangXieIrene\u002Fssds.pytorch](https:\u002F\u002Fgithub.com\u002FShuangXieIrene\u002Fssds.pytorch) | [PyTorch]\n  + [lzx1413\u002FPytorchSSD](https:\u002F\u002Fgithub.com\u002Flzx1413\u002FPytorchSSD) | [PyTorch]\n\n- 
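Many of the lightweight backbones above (MobileNetV3, ESPNet, Pelee, and others) get their parameter savings from factorized convolutions such as the depthwise-separable convolution. A quick back-of-the-envelope comparison shows why; the helper names below are ours, and the formulas are the standard parameter counts (bias terms ignored):

```python
def conv_params(c_in, c_out, k):
    """Weights of a standard k x k convolution."""
    return c_in * c_out * k * k

def dw_separable_params(c_in, c_out, k):
    """Depthwise k x k conv followed by a 1x1 pointwise conv."""
    depthwise = c_in * k * k   # one k x k filter per input channel
    pointwise = c_in * c_out   # 1x1 conv mixes channels
    return depthwise + pointwise

c_in, c_out, k = 128, 256, 3
std = conv_params(c_in, c_out, k)          # 294912 weights
sep = dw_separable_params(c_in, c_out, k)  # 33920 weights
ratio = std / sep                          # roughly 8.7x fewer weights
```

The same factorization also cuts multiply-accumulate operations by about the same factor, which is why it dominates mobile-oriented designs.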
[FSSD: Feature Fusion Single Shot Multibox Detector](https:\u002F\u002Farxiv.org\u002Fabs\u002F1712.00960) | [2017\u002F12]\n  + [ShuangXieIrene\u002Fssds.pytorch](https:\u002F\u002Fgithub.com\u002FShuangXieIrene\u002Fssds.pytorch) | [PyTorch]\n  + [lzx1413\u002FPytorchSSD](https:\u002F\u002Fgithub.com\u002Flzx1413\u002FPytorchSSD) | [PyTorch]\n  + [dlyldxwl\u002Ffssd.pytorch](https:\u002F\u002Fgithub.com\u002Fdlyldxwl\u002Ffssd.pytorch) | [PyTorch]\n\n- [Feature Pyramid Networks for Object Detection](https:\u002F\u002Farxiv.org\u002Fabs\u002F1612.03144) | [**CVPR 2017**]\n  + [tensorflow\u002Fmodels](https:\u002F\u002Fgithub.com\u002Ftensorflow\u002Fmodels\u002Ftree\u002Fmaster\u002Fresearch\u002Fobject_detection\u002Fmodels) | [TensorFlow]\n\n## 3.) Model Compression, Quantization and Acceleration\n\n### **[Papers]** \n**Pruning:**\n- [The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1803.03635) | [**ICLR 2019**]\n  + [google-research\u002Flottery-ticket-hypothesis](https:\u002F\u002Fgithub.com\u002Fgoogle-research\u002Flottery-ticket-hypothesis) | [TensorFlow]\n\n- [Rethinking the Value of Network Pruning](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05270) | [**ICLR 2019**]\n\n- [Slimmable Neural Networks](https:\u002F\u002Fopenreview.net\u002Fpdf?id=H1gMCsAqY7) | [**ICLR 2019**]\n  + [JiahuiYu\u002Fslimmable_networks](https:\u002F\u002Fgithub.com\u002FJiahuiYu\u002Fslimmable_networks) | [PyTorch]\n\n- [AMC: AutoML for Model Compression and Acceleration on Mobile Devices](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.03494) | [**ECCV 2018**]\n  + [AutoML for Model Compression (AMC): Trials and Tribulations](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller\u002Fwiki\u002FAutoML-for-Model-Compression-(AMC):-Trials-and-Tribulations) | [PyTorch]\n\n- [Learning Efficient Convolutional Networks through Network Slimming](https:\u002F\u002Farxiv.org\u002Fabs\u002F1708.06519) | [**ICCV 2017**]\n  + [foolwood\u002Fpytorch-slimming](https:\u002F\u002Fgithub.com\u002Ffoolwood\u002Fpytorch-slimming) | [PyTorch]\n\n- [Channel Pruning for Accelerating Very Deep Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1707.06168) | [**ICCV 2017**]\n  + [yihui-he\u002Fchannel-pruning](https:\u002F\u002Fgithub.com\u002Fyihui-he\u002Fchannel-pruning) | [Caffe]\n\n- 
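A recurring baseline in the pruning literature collected here is magnitude-based filter ranking: score each filter by the L1 norm of its weights and drop the smallest ones. The sketch below illustrates only that ranking step on plain nested lists (function names and the toy filters are ours); real implementations then rebuild the layer and fine-tune the network.

```python
def l1_norm(filt):
    # filt: nested list of weights for one filter
    return sum(abs(w) for row in filt for w in row)

def prune_filters(filters, keep_ratio=0.5):
    """Rank filters by L1 norm and keep the largest ones.
    Returns the kept filter indices in their original order."""
    norms = [(i, l1_norm(f)) for i, f in enumerate(filters)]
    n_keep = max(1, int(len(filters) * keep_ratio))
    top = sorted(norms, key=lambda t: t[1], reverse=True)[:n_keep]
    return sorted(i for i, _ in top)

# Four 2x2 "filters" with clearly different magnitudes
filters = [
    [[0.1, -0.1], [0.0, 0.2]],  # L1 = 0.4
    [[1.0, -2.0], [0.5, 0.5]],  # L1 = 4.0
    [[0.0, 0.0], [0.1, 0.0]],   # L1 = 0.1
    [[0.9, 0.9], [-0.9, 0.9]],  # L1 = 3.6
]
kept = prune_filters(filters, keep_ratio=0.5)  # keeps indices 1 and 3
```

Papers such as "Rethinking the Value of Network Pruning" question how much the ranking criterion itself matters versus the resulting architecture, which is worth keeping in mind before over-engineering this step.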
[Pruning Convolutional Neural Networks for Resource Efficient Inference](https:\u002F\u002Farxiv.org\u002Fabs\u002F1611.06440) | [**ICLR 2017**]\n  + [jacobgil\u002Fpytorch-pruning](https:\u002F\u002Fgithub.com\u002Fjacobgil\u002Fpytorch-pruning) | [PyTorch]\n\n- [Pruning Filters for Efficient ConvNets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1608.08710) | [**ICLR 2017**]\n\n**Quantization:**\n- [Understanding Straight-Through Estimator in Training Activation Quantized Neural Nets](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.05662) | [**ICLR 2019**]\n\n- [Quantization and Training of Neural Networks for Efficient Integer-Arithmetic-Only Inference](http:\u002F\u002Fopenaccess.thecvf.com\u002Fcontent_cvpr_2018\u002Fhtml\u002FJacob_Quantization_and_Training_CVPR_2018_paper.html) | [**CVPR 2018**]\n\n- [Quantizing deep convolutional networks for efficient inference: A whitepaper](https:\u002F\u002Farxiv.org\u002Fabs\u002F1806.08342) | [2018\u002F06]\n\n- [PACT: Parameterized Clipping Activation for Quantized Neural Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1805.06085) | [2018\u002F05]\n\n- [Post training 4-bit quantization of convolutional networks for rapid-deployment](https:\u002F\u002Farxiv.org\u002Fabs\u002F1810.05723) | [2018\u002F10]\n\n- [WRPN: Wide Reduced-Precision Networks](https:\u002F\u002Farxiv.org\u002Fabs\u002F1709.01134) | [**ICLR 2018**]\n\n- [Incremental Network Quantization: Towards Lossless CNNs with Low-Precision Weights](https:\u002F\u002Farxiv.org\u002Fabs\u002F1702.03044) | [**ICLR 2017**]\n\n- [DoReFa-Net: Training Low Bitwidth Convolutional Neural Networks with Low Bitwidth Gradients](https:\u002F\u002Farxiv.org\u002Fabs\u002F1606.06160) | [2016\u002F06]\n\n- [Estimating or Propagating Gradients Through Stochastic Neurons for Conditional Computation](https:\u002F\u002Farxiv.org\u002Fabs\u002F1308.3432) | [2013\u002F08]\n\n**Knowledge distillation:**\n- [Apprentice: Using Knowledge Distillation Techniques To Improve Low-Precision Network Accuracy](https:\u002F\u002Farxiv.org\u002Fabs\u002F1711.05852) | [**ICLR 2018**]\n\n- [Model compression via distillation and quantization](https:\u002F\u002Farxiv.org\u002Fabs\u002F1802.05668) | [**ICLR 2018**]\n\n**Acceleration:**\n- [Fast Algorithms for Convolutional Neural Networks](https:\u002F\u002Fwww.cv-foundation.org\u002Fopenaccess\u002Fcontent_cvpr_2016\u002Fpapers\u002FLavin_Fast_Algorithms_for_CVPR_2016_paper.pdf) | [**CVPR 2016**]\n  + [andravin\u002Fwincnn](https:\u002F\u002Fgithub.com\u002Fandravin\u002Fwincnn) | [Python]\n\n### **[Projects]**\n- [NervanaSystems\u002Fdistiller](https:\u002F\u002Fgithub.com\u002FNervanaSystems\u002Fdistiller\u002F) | [PyTorch]\n- [Tencent\u002FPocketFlow](https:\u002F\u002Fgithub.com\u002FTencent\u002FPocketFlow) | [TensorFlow]\n- 
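Several of the quantization papers above build on uniform affine (asymmetric) quantization: a real range is mapped to integers via a scale and a zero point, and the zero point guarantees that 0.0 is exactly representable. The following is a minimal pure-Python sketch of that per-tensor scheme under those standard definitions; the function names are ours, and production implementations (e.g., in the frameworks referenced by these papers) handle per-channel parameters and integer-only arithmetic on top of this.

```python
def quantize_params(values, num_bits=8):
    """Per-tensor asymmetric quantization parameters (scale, zero point)."""
    qmin, qmax = 0, 2 ** num_bits - 1
    lo, hi = min(values + [0.0]), max(values + [0.0])  # range must cover 0
    scale = (hi - lo) / (qmax - qmin) or 1.0           # guard zero range
    zero_point = round(qmin - lo / scale)
    return scale, zero_point

def quantize(values, scale, zero_point, num_bits=8):
    qmax = 2 ** num_bits - 1
    return [min(qmax, max(0, round(v / scale) + zero_point)) for v in values]

def dequantize(q, scale, zero_point):
    return [(qi - zero_point) * scale for qi in q]

w = [-0.5, 0.0, 0.25, 1.0]
scale, zp = quantize_params(w)
q = quantize(w, scale, zp)
w_hat = dequantize(q, scale, zp)
# Round-trip error is bounded by about half a quantization step.
max_err = max(abs(a - b) for a, b in zip(w, w_hat))
```

Note how 0.0 maps exactly to the zero point and back, which is what makes zero-padding and ReLU cheap in the integer domain.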
[aaron-xichen\u002Fpytorch-playground](https:\u002F\u002Fgithub.com\u002Faaron-xichen\u002Fpytorch-playground) | [PyTorch]\n\n### **[Tutorials\u002FBlogs]**\n- [Introducing the CVPR 2018 On-Device Visual Intelligence Challenge](https:\u002F\u002Fresearch.googleblog.com\u002Fsearch\u002Flabel\u002FOn-device%20Learning)\n\n## 4.) Hyperparameter Optimization\n### **[Papers]** \n- [Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly](https:\u002F\u002Farxiv.org\u002Fabs\u002F1903.06694) | [2019\u002F03]\n  + [dragonfly\u002Fdragonfly](https:\u002F\u002Fgithub.com\u002Fdragonfly\u002Fdragonfly)\n\n- [Efficient High Dimensional Bayesian Optimization with Additivity and Quadrature Fourier Features](https:\u002F\u002Fpapers.nips.cc\u002Fpaper\u002F8115-efficient-high-dimensional-bayesian-optimization-with-additivity-and-quadrature-fourier-features) | [**NeurIPS 2018**]\n\n- [Google Vizier: A Service for Black-Box Optimization](https:\u002F\u002Fstatic.googleusercontent.com\u002Fmedia\u002Fresearch.google.com\u002Fen\u002F\u002Fpubs\u002Farchive\u002F46180.pdf) | [**SIGKDD 2017**]\n\n- [On Hyperparameter Optimization of Machine Learning Algorithms: Theory and Practice](https:\u002F\u002Farxiv.org\u002Fabs\u002F2007.15745) | [**Neurocomputing 2020**]\n  + [LiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms](https:\u002F\u002Fgithub.com\u002FLiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms)\n\n### **[Projects]**\n- [BoTorch](https:\u002F\u002Fbotorch.org\u002F) | [PyTorch]\n- [Ax (Adaptive Experimentation Platform)](https:\u002F\u002Fax.dev\u002F) | [PyTorch]\n- [Microsoft\u002Fnni](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni) | [Python]\n- [dragonfly\u002Fdragonfly](https:\u002F\u002Fgithub.com\u002Fdragonfly\u002Fdragonfly) | [Python]\n- [LiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms](https:\u002F\u002Fgithub.com\u002FLiYangHart\u002FHyperparameter-Optimization-of-Machine-Learning-Algorithms) | [Python]\n\n### **[Tutorials\u002FBlogs]**\n- [Hyperparameter tuning in Cloud Machine Learning Engine using Bayesian Optimization](https:\u002F\u002Fcloud.google.com\u002Fblog\u002Fproducts\u002Fgcp\u002Fhyperparameter-tuning-cloud-machine-learning-engine-using-bayesian-optimization)\n\n- 
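The Bayesian-optimization work above is usually benchmarked against the simplest automated baseline: random search over log-scaled hyperparameter ranges. A minimal self-contained sketch follows; the toy `objective` stands in for a real validation loss, and all names and ranges are illustrative only.

```python
import math
import random

def objective(lr, weight_decay):
    # Toy stand-in for validation loss, minimized at lr=0.1, wd=1e-4.
    return (math.log10(lr) + 1) ** 2 + (math.log10(weight_decay) + 4) ** 2

def random_search(n_trials=200, seed=0):
    rng = random.Random(seed)
    best_cfg, best_loss = None, float("inf")
    for _ in range(n_trials):
        cfg = {
            # Sample log-uniformly, the usual choice for scale-like params.
            "lr": 10 ** rng.uniform(-4, 0),
            "weight_decay": 10 ** rng.uniform(-6, -2),
        }
        loss = objective(**cfg)
        if loss < best_loss:
            best_cfg, best_loss = cfg, loss
    return best_cfg, best_loss

cfg, loss = random_search()
```

Bayesian optimizers such as Dragonfly, BoTorch, or NNI's tuners replace the blind sampling step with a surrogate-model-guided proposal, which is where their sample-efficiency gains come from.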
[An Overview of Bayesian Optimization](https:\u002F\u002Fsoubhikbarari.github.io\u002Fblog\u002F2016\u002F09\u002F14\u002Foverview-of-bayesian-optimization)\n\n- [Bayesian Optimization](http:\u002F\u002Fkrasserm.github.io\u002F2018\u002F03\u002F21\u002Fbayesian-optimization\u002F)\n  + [krasserm\u002Fbayesian-machine-learning](https:\u002F\u002Fgithub.com\u002Fkrasserm\u002Fbayesian-machine-learning) | [Python]\n\n## 5.) Automated Feature Engineering\n\n## Model Analysis Tools\n- [Netscope CNN Analyzer](https:\u002F\u002Fchakkritte.github.io\u002Fnetscope\u002Fquickstart.html) | [Caffe]\n\n- [sksq96\u002Fpytorch-summary](https:\u002F\u002Fgithub.com\u002Fsksq96\u002Fpytorch-summary) | [PyTorch]\n\n- [Lyken17\u002Fpytorch-OpCounter](https:\u002F\u002Fgithub.com\u002FLyken17\u002Fpytorch-OpCounter) | [PyTorch]\n\n- [sovrasov\u002Fflops-counter.pytorch](https:\u002F\u002Fgithub.com\u002Fsovrasov\u002Fflops-counter.pytorch) | [PyTorch]\n\n## References\n- [Literature on Neural Architecture Search](https:\u002F\u002Fwww.ml4aad.org\u002Fautoml\u002Fliterature-on-neural-architecture-search\u002F)\n- [handong1587\u002Fhandong1587.github.io](https:\u002F\u002Fgithub.com\u002Fhandong1587\u002Fhandong1587.github.io\u002Ftree\u002Fmaster\u002F_posts\u002Fdeep_learning)\n- [hibayesian\u002Fawesome-automl-papers](https:\u002F\u002Fgithub.com\u002Fhibayesian\u002Fawesome-automl-papers)\n- [mrgloom\u002Fawesome-semantic-segmentation](https:\u002F\u002Fgithub.com\u002Fmrgloom\u002Fawesome-semantic-segmentation)\n- [amusi\u002Fawesome-object-detection](https:\u002F\u002Fgithub.com\u002Famusi\u002Fawesome-object-detection)","# Awesome-AutoML-and-Lightweight-Models Quick Start Guide\n\nThis repository is a curated collection of high-quality AutoML (automated machine learning) and lightweight-model resources, covering neural architecture search (NAS), lightweight structures, model compression\u002Fquantization\u002Facceleration, hyperparameter optimization, and automated feature engineering. Because it is an **awesome list** rather than a single software package, this guide explains how to browse the resources, set up an environment, and run a typical open-source project from the list.\n\n## Environment Setup\n\nBefore starting, make sure your development environment meets the following basic requirements. Most of the listed projects are built on Python deep-learning frameworks.\n\n*   **Operating system**: Linux (Ubuntu 18.04+ recommended), macOS, or Windows (WSL2 required)\n*   **Python version**: 3.6 - 3.9 (the exact version depends on the chosen sub-project)\n*   **GPU support**: an NVIDIA GPU with matching CUDA and cuDNN is recommended, since NAS and model training are compute-intensive.\n*   **Prerequisites**:\n    *  
Git\n    *   PyTorch or TensorFlow (depending on the paper code you pick)\n    *   Conda (recommended for managing virtual environments)\n\n> **Mirror tips for users in mainland China**:\n> *   **Faster Git clones**: if `github.com` is slow to reach, use a domestic mirror (such as a synced mirror on Gitee, if one exists) or configure a proxy.\n> *   **Python packages**: install dependencies from the Tsinghua or Aliyun mirror.\n>     ```bash\n>     pip config set global.index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n>     ```\n\n## Installation\n\nBecause the repository links to dozens of independent projects, there is no single install command. The classic **DARTS** (Differentiable Architecture Search) and **EfficientNet** serve as examples below.\n\n### 1. Clone the repository to browse the resources\nFirst clone the main repository to see the full list and the latest paper links:\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fguan-yuan\u002FAwesome-AutoML-and-Lightweight-Models.git\ncd Awesome-AutoML-and-Lightweight-Models\n```\n\n### 2. Create a virtual environment\nConda is recommended for isolation:\n```bash\nconda create -n automl_env python=3.8\nconda activate automl_env\n```\n\n### 3. Install a specific project (DARTS as an example)\nPick a project from the list (for example `quark0\u002Fdarts`) and clone it separately:\n\n```bash\n# Clone a concrete DARTS implementation\ngit clone https:\u002F\u002Fgithub.com\u002Fquark0\u002Fdarts.git\ncd darts\n\n# Install PyTorch (pick the build matching your CUDA version; the CPU build is shown here)\npip install torch torchvision\n\n# Install any project-specific dependencies the sub-project declares\npip install -r requirements.txt\n```\n\n> **Note**: dependencies differ from project to project. For a TensorFlow project (such as the official `EfficientNet` implementation), run `pip install tensorflow` instead.\n\n## Basic Usage\n\nThe following shows how to run a typical NAS search (DARTS) and how to load a lightweight model for inference (EfficientNet).\n\n### Scenario 1: Run a neural architecture search (NAS)\nIn the `darts` project, launch the search script to look for an architecture on CIFAR-10 (the directory and flags below follow the `quark0\u002Fdarts` layout; check the sub-project's README for the exact interface):\n\n```bash\n# Enter the CNN search directory\ncd darts\u002Fcnn\n\n# Run the search (uses the GPU by default)\npython train_search.py --batch_size 64\n```\n*When the search finishes, the discovered architecture (genotype) is written to the experiment's log directory.*\n\n### Scenario 2: Inference with a lightweight model\nIf you chose `EfficientNet` (the PyTorch version), the following code loads a pretrained model for classification:\n\n```python\nimport torch\nfrom efficientnet_pytorch import EfficientNet\n\n# Load a pretrained EfficientNet-B0\nmodel = EfficientNet.from_pretrained('efficientnet-b0')\n\n# Prepare the input image (assumed to be preprocessed into a tensor)\n# img_tensor = ... 
\n\n# Inference\nmodel.eval()\nwith torch.no_grad():\n    outputs = model(img_tensor)\n    predicted_class = outputs.argmax(dim=1)\n\nprint(f\"Predicted class: {predicted_class}\")\n```\n\n### Scenario 3: Use a general AutoML framework (Microsoft NNI)\nThe list includes [Microsoft\u002Fnni](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002Fnni), a general-purpose AutoML framework that supports many NAS algorithms.\n\n1.  **Install NNI**:\n    ```bash\n    pip install nni\n    ```\n2.  **Run a sample experiment**:\n    ```bash\n    # Launch a simple MNIST example experiment\n    nnictl create --config examples\u002Ftrials\u002Fmnist-pytorch\u002Fconfig.yml\n    ```\n3.  **Inspect the results**:\n    The command prints a URL; open it in a browser to visualize the search process and the best model structure found.\n\n---\n**Tip**: the core value of this repository is its curated mapping from papers to code. Pick the category matching your task (e.g., mobile detection or semantic segmentation), find the most suitable paper repository under that heading in the README, and then follow that sub-project's own documentation.","An edge-computing team is working to deploy a high-accuracy defect-detection model onto compute-constrained industrial cameras, facing the twin challenges of oversized models and high inference latency.\n\n### Without Awesome-AutoML-and-Lightweight-Models\n- **Blind architecture design**: engineers hand-tune layer counts and kernel sizes by intuition; weeks of trial and error still fail to balance accuracy against speed.\n- **Scattered compression techniques**: pruning, quantization, and acceleration methods are spread across papers with no systematic overview, making technology selection hard and reproduction costly.\n- **Inefficient hardware adaptation**: without hardware-aware search schemes for specific chips (e.g., ARM), models that perform well on training servers run far slower than expected on edge devices.\n- **Slow hyperparameter tuning**: manual grid search burns GPU hours yet often gets stuck in local optima, with no automated way to find better configurations.\n\n### With Awesome-AutoML-and-Lightweight-Models\n- **Smart architecture search**: reuse listed methods such as ProxylessNAS or FBNet to quickly search a lightweight network tailored to the industrial scenario, cutting the development cycle from weeks to days.\n- **One-stop access to techniques**: the clearly categorized list makes it fast to locate and integrate the latest compression and quantization code, lowering the barrier to deployment.\n- **Hardware-aware optimization**: hardware-aware NAS works in the list help ensure the generated model reaches millisecond-level inference on the target camera chip.\n- **Automated tuning**: the collected hyperparameter-optimization and feature-engineering methods deliver better models than manual tuning with minimal human effort.\n\nBy aggregating state-of-the-art automated search and lightweight-model techniques, Awesome-AutoML-and-Lightweight-Models makes high-performance model deployment in resource-constrained settings efficient and practical.","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fguan-yuan_Awesome-AutoML-and-Lightweight-Models_613ff0bf.png","guan-yuan","GYChen","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fguan-yuan_cbd82840.jpg","AILAB, National Tsing Hua University",null,"Hsinchu","gychen.tech@gmail.com","https:\u002F\u002Fgithub.com\u002Fguan-yuan",856,161,"2026-03-23T21:25:44",4,"Not specified","Not specified (some listed workloads, such as NAS and lightweight-model training, typically recommend or require an NVIDIA GPU, but no specific model or VRAM amount is stated in the docs)",{"notes":29,"python":26,"dependencies":30},"This repository is an awesome list that aggregates links to papers and open-source code for AutoML and lightweight models; it is not itself a single runnable tool, so there is no unified environment requirement. Specific requirements (operating system, GPU, Python version, dependency versions, etc.) are given in the original repositories of the individual projects listed (e.g., DARTS, EfficientNet, ProxylessNAS). The main frameworks listed include PyTorch, TensorFlow, Caffe, and Keras.",[31,32,33,34,35],"Pytorch","Tensorflow","Caffe","Keras","Python",[37],"Development Frameworks",[39,40,41,42,43,44,45,46,47,48,49,50,51,52,53],"automl","meta-learning","automated-feature-engineering","hyperparameter-optimization","architecture-search","model-compression","model-acceleration","awesome-list","neural-architecture-search","nas","pytorch","quantization","quantized-neural-network","quantized-training","tensorflow",2,"ready","2026-03-27T02:49:30.150509","2026-04-09T12:57:38.511646",[],[],[61,73,81,90,98,107],{"id":62,"name":63,"github_repo":64,"description_zh":65,"stars":66,"difficulty_score":67,"last_commit_at":68,"category_tags":69,"status":55},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw is a local-first AI assistant built for individuals, designed to give you a fully controllable companion on your own devices. It breaks the usual constraint of AI assistants being tied to one website or app: it plugs directly into the messaging channels you already use, including WeChat, WhatsApp, Telegram, Discord, iMessage, and dozens more. Whichever chat app you message from, OpenClaw responds instantly, and it also supports voice interaction on macOS, iOS, and Android plus a live canvas you can drive in real time.\n\nIt mainly addresses the demand for data privacy, fast responses, and an always-on experience. By running the AI locally, users get quick, private assistance without depending on cloud services, keeping you in charge of your own data. Its standout technical feature is a powerful gateway architecture that separates the control plane from the core assistant, keeping cross-platform communication smooth and extensible.\n\nOpenClaw suits technically minded tinkerers and developers who want personalized workflows, as well as privacy-conscious users who refuse to be locked into a single ecosystem. Basic terminal skills are enough to deploy it through a simple command-line wizard (macOS, Linux, and Windows WSL2 are supported). If you long for an assistant that understands",349277,3,"2026-04-06T06:32:30",[70,37,71,72],"Agent","Image","Data Tools",{"id":74,"name":75,"github_repo":76,"description_zh":77,"stars":78,"difficulty_score":67,"last_commit_at":79,"category_tags":80,"status":55},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui is a Gradio-based web interface that makes it easy to run the powerful Stable Diffusion image-generation models locally. It fixes the pain points of the original models, which required the command line, had a high barrier to entry, and scattered their features, by consolidating the complex AI image-generation workflow into one intuitive graphical platform.\n\nWhether you are an everyday creator who wants to get started quickly, a designer who needs fine-grained control over image details, or a developer or researcher probing the model's potential, there is something here for you. Its core strength is feature richness: beyond text-to-image, image-to-image, inpainting, and outpainting, it introduces advanced features such as attention-weight adjustment, prompt matrices, negative prompts, and highres fix. It also bundles face-restoration tools such as GFPGAN and CodeFormer, supports multiple neural upscalers, and allows unlimited extension through a plugin system. Even on devices with limited VRAM, stable-diffusion-webui offers optimization options that put high-quality AI art within reach.",162132,"2026-04-05T11:01:52",[37,71,70],{"id":82,"name":83,"github_repo":84,"description_zh":85,"stars":86,"difficulty_score":54,"last_commit_at":87,"category_tags":88,"status":55},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code is a high-performance optimization system for AI coding assistants such as Claude Code, Codex, and Cursor. More than a set of config files, it is a complete framework honed through long real-world use, built to address the core pain points AI agents hit in practice: low efficiency, lost memory, security risks, and no capacity for continuous learning.\n\nWith modular skills, intuition enhancement, persistent memory, and built-in security scanning, everything-claude-code markedly improves AI performance on complex tasks and helps developers build more stable, smarter production-grade agents. Its research-first development philosophy and token-consumption optimizations make responses faster and cheaper while defending against potential attack vectors.\n\nIt is especially suited to software developers, AI researchers, and teams that want deeply customized AI workflows. Whether you are building on large codebases or need AI help with security audits and automated testing, everything-claude-code provides strong foundations. An open-source project that won an Anthropic hackathon award, it combines multilingual support with a rich set of practical hooks, letting the AI truly grow into an assistant that understands",146793,"2026-04-08T23:32:35",[37,70,89],"Language Models",{"id":91,"name":92,"github_repo":93,"description_zh":94,"stars":95,"difficulty_score":54,"last_commit_at":96,"category_tags":97,"status":55},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI is a powerful, highly modular visual AI engine built for designing and executing complex Stable Diffusion image-generation pipelines. It drops the traditional code-writing approach in favor of an intuitive node-graph interface, letting users build personalized generation pipelines by wiring together functional modules.\n\nThis design neatly solves the complexity and inflexibility of configuring advanced AI image workflows. Without any programming background, users can freely combine models, tune parameters, and preview results live, handling everything from basic text-to-image up to multi-stage high-resolution refinement. ComfyUI is broadly compatible: it runs on Windows, macOS, and Linux, supports NVIDIA, AMD, Intel, and Apple Silicon hardware, and was among the first to support cutting-edge models such as SDXL, Flux, and SD3.\n\nWhether you are a researcher or developer exploring algorithmic potential, or a designer or seasoned AI-art enthusiast chasing maximum creative freedom, ComfyUI has strong support to offer. Its modular architecture lets the community keep extending it, making it one of the most flexible open-source diffusion-model tools with the richest ecosystem, and helping users turn ideas into results efficiently.",108111,"2026-04-08T11:23:26",[37,71,70],{"id":99,"name":100,"github_repo":101,"description_zh":102,"stars":103,"difficulty_score":54,"last_commit_at":104,"category_tags":105,"status":55},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown is a lightweight Python utility from Microsoft's AutoGen team for efficiently converting a wide range of files to Markdown. It can parse PDF, Word, Excel, PowerPoint, images (with OCR), audio (with speech transcription), HTML, and even YouTube links, accurately extracting key structure such as headings, lists, tables, and links.\n\nAs AI applications spread, large language models (LLMs) handle text well but cannot read complex binary office documents directly. MarkItDown fills exactly that gap, turning unstructured or semi-structured files into Markdown that models understand natively and that is highly token-efficient, making it an ideal bridge between local files and AI analysis pipelines. It also provides an MCP (Model Context Protocol) server for seamless integration with LLM apps such as Claude Desktop.\n\nIt is particularly useful for developers, data scientists, and AI researchers, especially those building retrieval-augmented generation (RAG) systems, running batch text analysis, or letting AI assistants read local files directly. The output is reasonably human-readable too, but its core strength lies in giving machines",93400,"2026-04-06T19:52:38",[106,37],"Plugins",{"id":108,"name":109,"github_repo":110,"description_zh":111,"stars":112,"difficulty_score":67,"last_commit_at":113,"category_tags":114,"status":55},4487,"LLMs-from-scratch","rasbt\u002FLLMs-from-scratch","LLMs-from-scratch is a PyTorch-based open-source educational project that guides you through building a ChatGPT-like large language model (LLM) step by step from the ground up. It is the official code repository for the book of the same name and provides a complete hands-on path covering model development, pretraining, and finetuning.\n\nThe project tackles the black-box problem in learning about large models: many developers can call ready-made models but struggle to understand the underlying architecture and training mechanics. By writing every line of core code yourself, you gain a thorough grasp of the Transformer architecture, attention mechanisms, and other key principles, and come to understand how large models actually think. The project also includes code for loading large pretrained weights for finetuning, extending the theory into practical use.\n\nLLMs-from-scratch is ideal for AI developers, researchers, and computer-science students who want to go beyond APIs and dig into how models are built. Its distinctive strength is its step-by-step teaching design: a complex engineering effort is broken into clear stages with detailed diagrams and examples, making a small but fully functional LLM achievable. Whether you want to solidify your theoretical foundations or prepare to build larger models in the future",90106,"2026-04-06T11:19:32",[89,71,70,37]]