[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-thoughtworksarts--EmoPy":3,"tool-thoughtworksarts--EmoPy":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",150037,2,"2026-04-10T23:33:47",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":76,"owner_email":77,"owner_twitter":78,"owner_website":79,"owner_url":80,"languages":81,"stars":90,"forks":91,"last_commit_at":92,"license":93,"difficulty_score":32,"env_os":94,"env_gpu":95,"env_ram":96,"env_deps":97,"category_tags":105,"github_topics":107,"view_count":32,"oss_zip_url":76,"oss_zip_packed_at":76,"status":17,"created_at":117,"updated_at":118,"faqs":119,"releases":148},6535,"thoughtworksarts\u002FEmoPy","EmoPy","A deep neural net toolkit for emotion analysis via Facial Expression Recognition (FER)","EmoPy 是一款基于深度神经网络的 Python 开源工具包，专为面部表情识别（FER）设计，能够通过分析人脸图像自动预测人类的情绪分类。它致力于解决情绪分析领域长期依赖昂贵私有数据集和商业闭源算法的痛点，让开发者能够利用公开数据免费构建、研究并集成自己的情绪识别模型。\n\n这款工具特别适合人工智能开发者、学术研究人员以及希望探索情感计算应用的设计师使用。EmoPy 的核心亮点在于其模块化架构，内置了多种基于 Keras 和 TensorFlow 后端的神经网络结构，用户可以根据需求灵活切换不同架构以测试最佳性能。同时，它提供了预训练模型接口，让用户能快速上手运行。\n\n需要注意的是，由于坚持使用公开数据集（如 Microsoft FER+），EmoPy 在光线均匀、构图与训练数据风格相近的环境下表现最佳。虽然其在复杂现实场景中的精度可能不及拥有百万级私有数据的商业方案，但它为社区提供了一个透明、可复现且易于扩展的研究基准，极大地降低了情绪识别技术的入门门槛。","# EmoPy\nEmoPy is a python toolkit with deep neural net classes which predicts human emotional expression classifications given images of people's faces. The goal of this project is to explore the field of [Facial Expression Recognition (FER)](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEmotion_recognition) using existing public datasets, and make neural network models which are free, open, easy to research and easy integrate into other projects.\n\n![Labeled FER Images](readme_docs\u002Flabeled_images_7.png \"Labeled Facial Expression Images\")  \n*Figure from [@Chen2014FacialER]*\n\nThe behavior of the system is highly dependent on the available data, and the developers of EmoPy created and tested the system using only publicly-available datasets.\n\nTo get a better grounding in the project you may find these write-ups useful:\n* [Recognizing human facial expressions with machine learning](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Frecognizing-human-facial-expressions-machine-learning)\n* [EmoPy: a machine learning toolkit for emotional expression](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Femopy-machine-learning-toolkit-emotional-expression)\n\nWe aim to expand our development community, and we are open to suggestions and contributions. 
Usually these types of algorithms are used commercially, so we want to help open source the best possible version of them in order to improve public access and engagement in this area. Please contact an EmoPy maintainer (see below) to discuss.\n\n## Overview\n\nEmoPy includes several modules that are plugged together to build a trained FER prediction model.\n\n- `fermodel.py`\n- `neuralnets.py`\n- `dataset.py`\n- `data_loader.py`\n- `csv_data_loader.py`\n- `directory_data_loader.py`\n- `data_generator.py`\n\nThe `fermodel.py` module uses pre-trained models for FER prediction, making it the easiest entry point to get a trained model up and running quickly.\n\nEach of the modules contains one class, except for `neuralnets.py`, which has one interface and five subclasses. Each of these subclasses implements a different neural net architecture using the Keras framework with Tensorflow backend, allowing you to experiment and see which one performs best for your needs.\n\nThe [EmoPy documentation](https:\u002F\u002Femopy.readthedocs.io\u002F) contains detailed information on the classes and their interactions. Also, an overview of the different neural nets included in this project is included below.\n\n## Operating Constraints\n\nCommercial FER projects are regularly trained on millions of labeled images, in massive private datasets. By contrast, in order to remain free and open source, EmoPy was created to work with only public datasets, which presents a major constraint on training for accurate results.\n\nEmoPy was originally created and designed to fulfill the needs of the [RIOT project](https:\u002F\u002Fthoughtworksarts.io\u002Fprojects\u002Friot\u002F), in which audience members facial expressions are recorded in a controlled lighting environment.\n\nFor these two reasons, EmoPy functions best when the input image:\n\n* is evenly lit, with relatively few shadows, and\u002For\n* matches to some extent the style, framing and cropping of images from the training dataset\n\nAs of this writing, the best available public dataset we have found is [Microsoft FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus), with around 30,000 images. Training on this dataset should yield best results when the input image relates to some extent to the style of the images in the set.\n\nFor a deeper analysis of the origin and operation of EmoPy, which will be useful to help evaluate its potential for your needs, please read our [full write-up on EmoPy](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Femopy-machine-learning-toolkit-emotional-expression).\n\n## Choosing a Dataset\n\nTry out the system using your own dataset or a small dataset we have provided in the [Emopy\u002Fexamples\u002Fimage_data](Emopy\u002Fexamples\u002Fimage_data) subdirectory. The sample datasets we provide will not yield good results due to their small size, but they serve as a great way to get started.\n\nPredictions ideally perform well on a diversity of datasets, illumination conditions, and subsets of the standard 7 emotion labels (happiness, anger, fear, surprise, disgust, sadness, calm\u002Fneutral) seen in FER research. Some good example public datasets are the [Extended Cohn-Kanade](http:\u002F\u002Fwww.consortium.ri.cmu.edu\u002Fckagree\u002F) and [Microsoft FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus).\n\n## Environment Setup\n\nPython is compatible with multiple operating systems. 
If you would like to use EmoPy on another OS, please convert these instructions to match your target environment. Let us know how you get on, and we will try to support you and share your results.\n\nBefore beginning, if you do not have Homebrew installed run this command to install:\n\n```\n\u002Fusr\u002Fbin\u002Fruby -e \"$(curl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FHomebrew\u002Finstall\u002Fmaster\u002Finstall)\"\n```\n\nEmoPy runs using Python 3.6 and up, theoretically on any Python-compatible OS. We tested EmoPy using Python 3.6.6 on OSX. \n\nThere are 2 ways you can install Python 3.6.6:\n\n1. Directly from the [Python website](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-366\u002F), or\n2. Using [pyenv](https:\u002F\u002Fgithub.com\u002Fpyenv\u002Fpyenv):\n\n```\n$ brew install pyenv\n$ pyenv install 3.6.6\n``` \n\nGraphViz is required for visualisation functions.\n\n```\nbrew install graphviz\n```\n\nThe next step is to set up a virtual environment using virtualenv. Install virtualenv with sudo.\n```\nsudo pip install virtualenv\n```\n\nCreate and activate the virtual environment. Run:\n```\npython3.6 -m venv venv\n```\n\nOr if using pyenv:\n\n```\n$ pyenv exec python3.6 -m venv venv\n```\n\nWhere the second `venv` is the name of your virtual environment. To activate, run from the same directory:\n```\nsource venv\u002Fbin\u002Factivate\n```\nYour terminal command line should now be prefixed with ```(venv)```.\n\n(To deactivate the virtual environment run ```deactivate``` in the command line. You'll know it has been deactivated when the prefix ```(venv)``` disappears.)\n\n## Installation\n\n\n### From PyPI\nOnce the virtual environment is activated, you may install EmoPy using\n```\npip install EmoPy\n```\n\n### From the source\n\nClone the repository and open it in your terminal.\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy.git\ncd EmoPy\n```\n\nInstall the remaining dependencies using pip.\n\n```\npip install -r requirements.txt\n```\n\nNow you're ready to go!\n\n\n## Running tests\n\nYou can run the tests with:\n\n```\npython EmoPy\u002Ftests\u002Frun_all.py\n```\n\nWe encourage improvements and additions to these tests! \n\n\n## Running the examples\n\nYou can find example code to run each of the current neural net classes in [examples](EmoPy\u002Fexamples). You may either download the example directory to a location of your choice on your machine, or find the example directory included in the installation.\n\nIf you choose to use the installed package, you can find the examples directory by starting in the virtual environment directory you created and typing:\n```\ncd lib\u002Fpython3.6\u002Fsite-packages\u002FEmoPy\u002Fexamples\n```\n\n\nThe best place to start is the [FERModel example](EmoPy\u002Fexamples\u002Ffermodel_example.py). 
Here is a listing of that code:\n\n```python\nfrom EmoPy.src.fermodel import FERModel\nfrom pkg_resources import resource_filename\n\ntarget_emotions = ['calm', 'anger', 'happiness']\nmodel = FERModel(target_emotions, verbose=True)\n\nprint('Predicting on happy image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_happy_image.png'))\n\nprint('Predicting on disgust image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_disgust_image.png'))\n\nprint('Predicting on anger image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_anger_image2.png'))\n```\n\nThe code above loads a pre-trained model and then predicts an emotion on a sample image. As you can see, all you have to supply with this example is a set of target emotions and a sample image.\n\nOnce you have completed the installation, you can run this example from the examples folder by running the example script.\n\n```\npython fermodel_example.py\n```\n\nThe first thing the example does is load and initialize the model. Next it prints out emotion probabilities for each sample image its given. It should look like this:\n\n![FERModel Training Output](readme_docs\u002Fsample-fermodel-predictions.png \"FERModel Training Output\")\n\nTo train your own neural net, use one of our FER neural net classes to get started. You can try the convolutional_model.py example:\n\n```\npython convolutional_model.py\n``` \n\nThe example first initializes the model. A summary of the model architecture will be printed out. This includes a list of all the neural net layers and the shape of their output. Our models are built using the Keras framework, which offers this visualization function.\n\n![Convolutional Example Output Part 1](readme_docs\u002Fconvolutional_example_output1.png \"Convolutional Example Output Part 1\")\n\nYou will see the training and validation accuracies of the model being updated as it is trained on each sample image. The validation accuracy will be very low since we are only using three images for training and validation. It should look something like this:\n\n![Convolutional Example Output Part 2](readme_docs\u002Fconvolutional_example_output2.png \"Convolutional Example Output Part 2\")\n\n## Comparison of neural network models\n\n#### ConvolutionalNN\n\nConvolutional Neural Networks ([CNNs](https:\u002F\u002Fmedium.com\u002Ftechnologymadeeasy\u002Fthe-best-explanation-of-convolutional-neural-networks-on-the-internet-fbb8b1ad5df8)) are currently considered the go-to neural networks for Image Classification, because they pick up on patterns in small parts of an image, such as the curve of an eyebrow. EmoPy's ConvolutionalNN is trained on still images.\n\n#### TimeDelayConvNN\n\nThe Time-Delayed 3D-Convolutional Neural Network model is inspired by the work described in [this paper](http:\u002F\u002Fieeexplore.ieee.org\u002Fdocument\u002F7090979\u002F?part=1) written by Dr. Hongying Meng of Brunel University, London. It uses temporal information as part of its training samples. Instead of using still images as training samples, it uses past images from a series for additional context. One training sample will contain *n* number of images from a series and its emotion label will be that of the most recent image. 
The idea is to capture the progression of a facial expression leading up to a peak emotion.\n\n![Facial Expression Image Sequence](readme_docs\u002Fprogression-example.png \"Facial expression image sequence\")  \nFacial expression image sequence in Cohn-Kanade database from [@Jia2014]\n\n#### ConvolutionalLstmNN\n\nThe Convolutional Long Short Term Memory neural net is a convolutional and recurrent neural network hybrid. Convolutional NNs  use kernels, or filters, to find patterns in smaller parts of an image. Recurrent NNs ([RNNs](https:\u002F\u002Fdeeplearning4j.org\u002Flstm.html#recurrent)) take into account previous training examples, similar to the Time-Delay Neural Network, for context. This model is able to both extract local data from images and use temporal context.\n\nThe Time-Delay model and this model differ in how they use temporal context. The former only takes context from within video clips of a single face as shown in the figure above. The ConvolutionLstmNN is given still images that have no relation to each other. It looks for pattern differences between past image samples and the current sample as well as their labels. It isn’t necessary to have a progression of the same face, simply different faces to compare.\n\n![7 Standard Facial Expressions](readme_docs\u002Fseven-expression-examples.jpg \"7 Standard Facial Expressions\")  \nFigure from [@vanGent2016]\n\n#### TransferLearningNN\n\nThis model uses a technique known as [Transfer Learning](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2017\u002F06\u002Ftransfer-learning-the-art-of-fine-tuning-a-pre-trained-model\u002F), where pre-trained deep neural net models are used as starting points. The pre-trained models it uses are trained on images to classify objects. The model then retrains the pre-trained models using facial expression images with emotion classifications rather than object classifications. It adds a couple top layers to the original model to match the number of target emotions we want to classify and reruns the training algorithm with a set of facial expression images. It only uses still images, no temporal context.\n\n#### ConvolutionalNNDropout\n\nThis model is the most recent addition to EmoPy. It is a 2D Convolutional Neural Network that implements dropout, batch normalization, and L2 regularization. It is currently performing with a training accuracy of 0.7045 and a validation accuracy of 0.6536 when classifying 7 emotions. Further training will be done to determine how it performs on smaller subsets of emotions.\n\n## Performance\n\nBefore implementing the ConvolutionalNNDropout model, the ConvolutionalLstmNN model was performing best when classifying 7 emotions with a validation accuracy of 47.5%. The table below shows accuracy values of this model and the TransferLearningNN model when trained on all seven standard emotions and on a subset of three emotions (fear, happiness, neutral). 
They were trained on 5,000 images from the [FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus) dataset.\n\n| Neural Net Model    | 7 emotions        |                     | 3 emotions        |                     |\n|---------------------|-------------------|---------------------|-------------------|---------------------|\n|                     | Training Accuracy | Validation Accuracy | Training Accuracy | Validation Accuracy |\n| ConvolutionalLstmNN | 0.6187            | 0.4751              | 0.9148            | 0.6267              |\n| TransferLearningNN  | 0.5358            | 0.2933              | 0.7393            | 0.4840              |\n\nBoth models are overfitting, meaning that training accuracies are much higher than validation accuracies. This means that the models are doing a really good job of recognizing and classifying patterns in the training images, but do not generalize well: they are less accurate when predicting emotions for new images.\n\nIf you would like to experiment with different parameters using our neural net classes, we recommend you use [FloydHub](https:\u002F\u002Fwww.floydhub.com\u002Fabout), a platform for training and deploying deep learning models in the cloud. Let us know how your models are doing! The goal is to optimize the performance and generalizability of all the EmoPy models.\n\n## Guiding Principles\n\nThese are the principles we use to guide development and contributions to the project:\n\n- __FER for Good__. FER applications have the potential to be used for malicious purposes. We want to build EmoPy with a community that champions integrity, transparency, and awareness and hope to instill these values throughout development while maintaining an accessible, quality toolkit.\n\n- __User Friendliness.__ EmoPy prioritizes user experience and is designed to be as easy as possible to get an FER prediction model up and running by minimizing the total user requirements for basic use cases.\n\n- __Experimentation to Maximize Performance__. Optimal performance in FER prediction is a primary goal. The deep neural net classes are designed so that training parameters, image pre-processing options, and feature extraction methods are easy to modify, in the hopes that experimentation in the open-source community will lead to high-performing FER prediction.\n\n- __Modularity.__ EmoPy contains four base modules (`fermodel`, `neuralnets`, `imageprocessor`, and `featureextractor`) that can be easily used together with minimal restrictions.\n\n## Contributing\n\n1. Fork it!\n2. Create your feature branch: `git checkout -b my-new-feature`\n3. Commit your changes: `git commit -am 'Add some feature'`\n4. Push to the branch: `git push origin my-new-feature`\n5. Submit a pull request :D\n\nThis is a new library that has a lot of room for growth. 
Check out the list of open issues that we need help addressing!\n\n[@Chen2014FacialER]: https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FFacial-Expression-Recognition-Based-on-Facial-Comp-Chen-Chen\u002F677ebde61ba3936b805357e27fce06c44513a455 \"Facial Expression Recognition Based on Facial Components Detection and HOG Features\"\n\n[@Jia2014]: https:\u002F\u002Fwww.researchgate.net\u002Ffigure\u002FFig-2-Facial-expression-image-sequence-in-Cohn-Kanade-database_257627744_fig1 \"Head and facial gestures synthesis using PAD model for an expressive talking avatar\"\n\n[@vanGent2016]: http:\u002F\u002Fwww.paulvangent.com\u002F2016\u002F04\u002F01\u002Femotion-recognition-with-python-opencv-and-a-face-dataset\u002F \"Emotion Recognition With Python, OpenCV and a Face Dataset. A tech blog about fun things with Python and embedded electronics.\"\n\n## Contributors\n\nThanks goes to these wonderful people ([emoji key](https:\u002F\u002Fallcontributors.org\u002Fdocs\u002Fen\u002Femoji-key)):\n\n\u003C!-- ALL-CONTRIBUTORS-LIST:START - Do not remove or modify this section -->\n\u003C!-- prettier-ignore-start -->\n\u003C!-- markdownlint-disable -->\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fangelicaperez37\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_da7f75548950.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>angelicaperez37\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=angelicaperez37\" title=\"Code\">💻\u003C\u002Fa> \u003Ca href=\"#blog-angelicaperez37\" title=\"Blogposts\">📝\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=angelicaperez37\" title=\"Documentation\">📖\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsbriley\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_40e65166c5f1.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sbriley\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sbriley\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Ftania.pw\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_73bd094b3b45.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Sofia Tania\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=stania1\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fjahya.net\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_cc9431c9f7e2.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Andrew McWilliams\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=microcosm\" title=\"Documentation\">📖\u003C\u002Fa> \u003Ca href=\"#ideas-microcosm\" title=\"Ideas, Planning, & Feedback\">🤔\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca 
href=\"http:\u002F\u002Fwww.websonthewebs.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_12261761b5f6.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Webs\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=weberswords\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsaragw6\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_4d3fb2bc621d.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Sara GW\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=saragw6\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fmeganesu\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_1183db2de6d5.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Megan Sullivan\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=meganesu\" title=\"Documentation\">📖\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsadnantw\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_93cb7f642d16.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sadnantw\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sadnantw\" title=\"Code\">💻\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sadnantw\" title=\"Tests\">⚠️\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Fxuv.be\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_9a35e6b2e7bc.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Julien Deswaef\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=xuv\" title=\"Code\">💻\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=xuv\" title=\"Documentation\">📖\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsinbycos\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_c85b1d465d01.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Tanushri Chakravorty\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sinbycos\" title=\"Code\">💻\u003C\u002Fa> \u003Ca href=\"#example-sinbycos\" title=\"Examples\">💡\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Flinas.org\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_0a9cac39b313.png\" 
width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Linas Vepštas\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#plugin-linas\" title=\"Plugin\u002Futility libraries\">🔌\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Femilysachs.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_f57862272842.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Emily Sachs\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=emilysachs\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdianagamedi\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_cb2c640f9577.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Diana Gamez\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dianagamedi\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdtoakley\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_08075eb7c409.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>dtoakley\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dtoakley\" title=\"Documentation\">📖\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dtoakley\" title=\"Code\">💻\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fanjutiwari\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_9465cb50d789.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Anju\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#maintenance-anjutiwari\" title=\"Maintenance\">🚧\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsatishdash\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_70839c557881.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Satish Dash\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#maintenance-satishdash\" title=\"Maintenance\">🚧\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003C!-- markdownlint-enable -->\n\u003C!-- prettier-ignore-end -->\n\u003C!-- ALL-CONTRIBUTORS-LIST:END -->\n\nThis project follows the [all-contributors](https:\u002F\u002Fgithub.com\u002Fall-contributors\u002Fall-contributors) specification. Contributions of any kind welcome!\n\n## Projects built on EmoPy\n- [RIOT AI](http:\u002F\u002Fkarenpalmer.uk\u002Fportfolio\u002Friot\u002F)\n- [ROS wrapper for EmoPy](https:\u002F\u002Fgithub.com\u002Fhansonrobotics\u002Fros_emopy)\n\nWant to list you project here? 
Please file an [issue](issues\u002Fnew) (or pull request) and tell us how EmoPy is helping you.\n","# EmoPy\nEmoPy 是一个基于 Python 的工具包，包含深度神经网络类，能够根据人脸图像预测人类的情感表达类别。该项目的目标是利用现有的公开数据集探索 [面部表情识别 (FER)](https:\u002F\u002Fen.wikipedia.org\u002Fwiki\u002FEmotion_recognition) 领域，并构建免费、开源、易于研究且便于集成到其他项目中的神经网络模型。\n\n![带标签的 FER 图像](readme_docs\u002Flabeled_images_7.png \"带标签的面部表情图像\")  \n*源自 [@Chen2014FacialER]*\n\n系统的性能高度依赖于可用的数据，而 EmoPy 的开发者仅使用公开可用的数据集来创建和测试该系统。\n\n为了更好地了解该项目，您可以参考以下文章：\n* [使用机器学习识别人类面部表情](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Frecognizing-human-facial-expressions-machine-learning)\n* [EmoPy：用于情感表达的机器学习工具包](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Femopy-machine-learning-toolkit-emotional-expression)\n\n我们希望扩大开发社区，并欢迎各种建议和贡献。通常这类算法多用于商业场景，因此我们致力于将其以最佳开源形式呈现，从而提升公众在这一领域的参与度和可及性。如需讨论，请联系 EmoPy 的维护者（见下文）。\n\n## 概述\n\nEmoPy 包含多个模块，这些模块相互连接以构建经过训练的 FER 预测模型。\n\n- `fermodel.py`\n- `neuralnets.py`\n- `dataset.py`\n- `data_loader.py`\n- `csv_data_loader.py`\n- `directory_data_loader.py`\n- `data_generator.py`\n\n其中，`fermodel.py` 模块使用预先训练好的模型进行 FER 预测，使其成为快速启动并运行训练模型的最简单入口。\n\n除 `neuralnets.py` 外，每个模块都包含一个类；而 `neuralnets.py` 则包含一个接口和五个子类。这些子类分别实现了不同的神经网络架构，基于 Keras 框架与 TensorFlow 后端，允许您进行实验并找到最适合自身需求的模型。\n\n[EmoPy 文档](https:\u002F\u002Femopy.readthedocs.io\u002F) 提供了关于各个类及其交互的详细信息。此外，下方还列出了本项目中包含的不同神经网络概述。\n\n## 运行限制\n\n商业化的 FER 项目通常会在数百万张带标签的图像上进行训练，这些数据往往来自庞大的私有数据集。相比之下，为了保持免费和开源，EmoPy 只能使用公开数据集进行训练，这对其获得准确结果构成了重大限制。\n\nEmoPy 最初是为满足 [RIOT 项目](https:\u002F\u002Fthoughtworksarts.io\u002Fprojects\u002Friot\u002F) 的需求而设计的，在该项目中，观众的面部表情是在受控光照条件下被记录的。\n\n基于以上两点，EmoPy 在输入图像满足以下条件时效果最佳：\n\n* 光线均匀，阴影较少；或\n* 图像风格、构图和裁剪与训练数据集中的图像有一定相似性。\n\n截至目前，我们所找到的最佳公开数据集是 [Microsoft FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus)，约包含 3 万张图像。使用该数据集进行训练时，若输入图像与数据集中图像的风格较为接近，则效果更佳。\n\n如需深入了解 EmoPy 的起源与运作方式，以便评估其对您需求的适用性，请阅读我们的 [EmoPy 完整介绍](https:\u002F\u002Fwww.thoughtworks.com\u002Finsights\u002Fblog\u002Femopy-machine-learning-toolkit-emotional-expression)。\n\n## 数据集选择\n\n您可以尝试使用自己的数据集，或使用我们在 [Emopy\u002Fexamples\u002Fimage_data](Emopy\u002Fexamples\u002Fimage_data) 子目录中提供的小型数据集来测试系统。虽然我们提供的示例数据集由于规模较小，可能无法产生理想效果，但它们却是入门的好方法。\n\n理想的预测效果应在多种数据集、光照条件下，以及 FER 研究中常见的 7 种情绪标签（快乐、愤怒、恐惧、惊讶、厌恶、悲伤、平静\u002F neutral）的子集上表现良好。一些优秀的公开数据集包括 [Extended Cohn-Kanade](http:\u002F\u002Fwww.consortium.ri.cmu.edu\u002Fckagree\u002F) 和 [Microsoft FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus)。\n\n## 环境设置\n\nPython 兼容多种操作系统。如果您希望在其他操作系统上使用 EmoPy，请将以下说明调整为适合您的环境。请随时告知我们您的进展，我们将尽力为您提供支持并分享您的成果。\n\n开始之前，如果尚未安装 Homebrew，请运行以下命令进行安装：\n\n```\n\u002Fusr\u002Fbin\u002Fruby -e \"$(curl -fsSL https:\u002F\u002Fraw.githubusercontent.com\u002FHomebrew\u002Finstall\u002Fmaster\u002Finstall)\"\n```\n\nEmoPy 使用 Python 3.6 及以上版本运行，理论上可在任何兼容 Python 的操作系统上运行。我们已在 OSX 上使用 Python 3.6.6 对 EmoPy 进行了测试。\n\n您可以通过以下两种方式安装 Python 3.6.6：\n\n1. 直接从 [Python 官网](https:\u002F\u002Fwww.python.org\u002Fdownloads\u002Frelease\u002Fpython-366\u002F) 下载，或\n2. 
使用 [pyenv](https:\u002F\u002Fgithub.com\u002Fpyenv\u002Fpyenv)：\n\n```\n$ brew install pyenv\n$ pyenv install 3.6.6\n``` \n\nGraphViz 是可视化功能所必需的工具。\n\n```\nbrew install graphviz\n```\n\n下一步是使用 virtualenv 设置虚拟环境。请以 sudo 权限安装 virtualenv。\n\n```\nsudo pip install virtualenv\n```\n\n创建并激活虚拟环境。运行：\n\n```\npython3.6 -m venv venv\n```\n\n或者，如果您使用 pyenv：\n\n```\n$ pyenv exec python3.6 -m venv venv\n```\n\n其中第二个 `venv` 是您的虚拟环境名称。要激活它，请在同一目录下运行：\n\n```\nsource venv\u002Fbin\u002Factivate\n```\n\n此时，您的终端命令行前应显示 `(venv)` 前缀。\n\n（要停用虚拟环境，请在命令行中输入 `deactivate`。当 `(venv)` 前缀消失时，即表示已成功停用。）\n\n## 安装\n\n\n### 从 PyPI 安装\n虚拟环境激活后，您可以通过以下命令安装 EmoPy：\n\n```\npip install EmoPy\n```\n\n### 从源代码安装\n\n克隆仓库并在终端中打开。\n\n```\ngit clone https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy.git\ncd EmoPy\n```\n\n使用 pip 安装剩余的依赖项。\n\n```\npip install -r requirements.txt\n```\n\n现在您已经准备就绪！\n\n## 运行测试\n\n您可以运行以下命令来执行测试：\n\n```\npython EmoPy\u002Ftests\u002Frun_all.py\n```\n\n我们鼓励大家改进和补充这些测试！\n\n## 运行示例\n\n您可以在 [examples](EmoPy\u002Fexamples) 目录中找到当前每种神经网络类别的示例代码。您可以将示例目录下载到本地的任意位置，也可以直接使用安装包中包含的示例目录。\n\n如果您选择使用已安装的包，可以从您创建的虚拟环境目录开始，输入以下命令来找到示例目录：\n```\ncd lib\u002Fpython3.6\u002Fsite-packages\u002FEmoPy\u002Fexamples\n```\n\n\n最佳的入门示例是 [FERModel 示例](EmoPy\u002Fexamples\u002Ffermodel_example.py)。以下是该示例代码：\n\n```python\nfrom EmoPy.src.fermodel import FERModel\nfrom pkg_resources import resource_filename\n\ntarget_emotions = ['calm', 'anger', 'happiness']\nmodel = FERModel(target_emotions, verbose=True)\n\nprint('Predicting on happy image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_happy_image.png'))\n\nprint('Predicting on disgust image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_disgust_image.png'))\n\nprint('Predicting on anger image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_anger_image2.png'))\n```\n\n上述代码加载了一个预训练模型，并对一张示例图像进行情绪预测。正如您所见，这个示例只需要提供一组目标情绪和一张示例图像即可。\n\n完成安装后，您可以在 examples 文件夹中通过运行示例脚本运行此示例：\n\n```\npython fermodel_example.py\n```\n\n示例首先加载并初始化模型。接着，它会打印出给定每张示例图像的情绪概率分布。输出应如下所示：\n\n![FERModel 训练输出](readme_docs\u002Fsample-fermodel-predictions.png \"FERModel 训练输出\")\n\n要训练您自己的神经网络，可以使用我们提供的 FER 神经网络类之一作为起点。例如，您可以尝试 convolutional_model.py 示例：\n\n```\npython convolutional_model.py\n``` \n\n该示例首先初始化模型。随后会打印出模型架构的摘要信息，包括所有神经网络层及其输出形状的列表。我们的模型基于 Keras 框架构建，该框架提供了这一可视化功能。\n\n![卷积示例输出第一部分](readme_docs\u002Fconvolutional_example_output1.png \"卷积示例输出第一部分\")\n\n在模型对每张示例图像进行训练的过程中，您会看到训练准确率和验证准确率不断更新。由于我们仅使用三张图像进行训练和验证，验证准确率会相对较低。输出应类似于以下内容：\n\n![卷积示例输出第二部分](readme_docs\u002Fconvolutional_example_output2.png \"卷积示例输出第二部分\")\n\n## 神经网络模型比较\n\n#### 卷积神经网络 (ConvolutionalNN)\n\n卷积神经网络（CNNs）目前被认为是图像分类领域的首选模型，因为它们能够捕捉图像中小区域内的模式，比如眉毛的弧度。EmoPy 中的 ConvolutionalNN 是基于静态图像进行训练的。\n\n#### 时延三维卷积神经网络 (TimeDelayConvNN)\n\n时延三维卷积神经网络模型的灵感来源于伦敦布鲁内尔大学孟红英博士撰写的一篇论文[此处链接]。该模型在训练样本中引入了时间信息。与使用静态图像不同，它会利用序列中的历史图像来提供额外的上下文信息。每个训练样本将包含来自同一序列的 *n* 张图像，其情绪标签则取自最新的一张图像。这样做的目的是捕捉面部表情从初始状态逐渐发展至峰值情绪的过程。\n\n![面部表情图像序列](readme_docs\u002Fprogression-example.png \"面部表情图像序列\")  \nCohn-Kanade 数据库中的面部表情图像序列，来源：[@Jia2014]\n\n#### 卷积长短期记忆神经网络 (ConvolutionalLstmNN)\n\n卷积长短期记忆神经网络是一种卷积神经网络与循环神经网络的混合模型。卷积神经网络通过卷积核或滤波器在图像的小区域内寻找模式；而循环神经网络（RNNs）则会考虑先前的训练样本，类似于时延神经网络，以获取上下文信息。因此，该模型既能提取图像中的局部特征，又能利用时间上下文。\n\n时延模型与该模型在如何利用时间上下文方面有所不同。前者仅从单个面部的视频片段中获取上下文信息，如上图所示。而 ConvolutionalLstmNN 则接收彼此无关的静态图像，通过比较过去样本与当前样本之间的模式差异及其对应的情绪标签来进行学习。它不需要连续的同一面部表情序列，只需不同的人脸图像进行对比即可。\n\n![7 种标准面部表情](readme_docs\u002Fseven-expression-examples.jpg \"7 种标准面部表情\")  
\n来源：[@vanGent2016]\n\n#### 迁移学习神经网络 (TransferLearningNN)\n\n该模型采用一种称为[迁移学习](https:\u002F\u002Fwww.analyticsvidhya.com\u002Fblog\u002F2017\u002F06\u002Ftransfer-learning-the-art-of-fine-tuning-a-pre-trained-model\u002F)的技术，即以预训练的深度神经网络模型作为起点。这些预训练模型原本是用于对物体进行分类的。随后，该模型会使用带有情绪标签的面部表情图像重新训练这些预训练模型，而不是继续进行物体分类。为了匹配我们要分类的目标情绪数量，它会在原始模型的基础上添加几层顶层，并使用一组面部表情图像再次运行训练算法。该模型仅使用静态图像，不涉及时间上下文。\n\n#### 卷积神经网络 Dropout 版本 (ConvolutionalNNDropout)\n\n该模型是 EmoPy 中最新的成员。它是一种二维卷积神经网络，实现了 dropout、批量归一化和 L2 正则化技术。目前，在对 7 种情绪进行分类时，其训练准确率为 0.7045，验证准确率为 0.6536。后续还将进一步训练，以评估其在更小的情绪子集上的表现。\n\n## 性能\n\n在实现 ConvolutionalNNDropout 模型之前，ConvolutionalLstmNN 模型在对 7 种情绪进行分类时表现最佳，验证集准确率为 47.5%。下表展示了该模型和 TransferLearningNN 模型分别在所有七种标准情绪以及三种情绪子集（恐惧、快乐、中性）上训练时的准确率。这两个模型均基于 [FER+](https:\u002F\u002Fgithub.com\u002FMicrosoft\u002FFERPlus) 数据集中的 5,000 张图像进行训练。\n\n| 神经网络模型        | 7种情绪        |                     | 3种情绪        |                     |\n|---------------------|-------------------|---------------------|-------------------|---------------------|\n|                     | 训练集准确率     | 验证集准确率       | 训练集准确率     | 验证集准确率       |\n| ConvolutionalLstmNN | 0.6187            | 0.4751              | 0.9148            | 0.6267              |\n| TransferLearningNN  | 0.5358            | 0.2933              | 0.7393            | 0.4840              |\n\n两种模型都存在过拟合现象，即训练集准确率远高于验证集准确率。这表明模型在识别和分类训练图像中的模式方面表现优异，但在泛化能力上有所不足，导致对新图像的情绪预测准确性较低。\n\n如果您希望使用我们的神经网络类尝试不同的参数设置，我们推荐您使用 [FloydHub](https:\u002F\u002Fwww.floydhub.com\u002Fabout)，这是一个用于在云端训练和部署深度学习模型的平台。请随时告诉我们您的模型表现如何！我们的目标是优化所有 EmoPy 模型的性能和泛化能力。\n\n## 指导原则\n\n以下是我们指导项目开发及贡献所遵循的原则：\n\n- __FER向善__。FER应用有可能被用于恶意目的。我们希望建立一个倡导诚信、透明与意识的社区来共同开发 EmoPy，并在开发过程中始终秉持这些价值观，同时提供一个易于使用且高质量的工具包。\n\n- __用户友好性__。EmoPy 将用户体验放在首位，旨在通过尽量减少基本用例所需的用户操作步骤，使用户能够轻松地搭建并运行 FER 预测模型。\n\n- __以实验提升性能__。实现 FER 预测的最佳性能是我们的首要目标。深度神经网络类的设计允许用户便捷地调整训练参数、图像预处理选项以及特征提取方法，期望通过开源社区的广泛实验，推动 FER 预测性能的不断提升。\n\n- __模块化__。EmoPy 包含四个基础模块（`fermodel`、`neuralnets`、`imageprocessor` 和 `featureextractor`），这些模块可以灵活组合使用，且限制较少。\n\n## 贡献方式\n\n1. Fork 本仓库！\n2. 创建你的功能分支：`git checkout -b my-new-feature`\n3. 提交更改：`git commit -am '添加新功能'`\n4. 推送到分支：`git push origin my-new-feature`\n5. 
发起拉取请求 :D\n\n这是一个尚处于起步阶段的新库，未来还有很大的发展空间。请查看我们列出的待解决事项，欢迎大家一起参与贡献！\n\n[@Chen2014FacialER]: https:\u002F\u002Fwww.semanticscholar.org\u002Fpaper\u002FFacial-Expression-Recognition-Based-on-Facial-Comp-Chen-Chen\u002F677ebde61ba3936b805357e27fce06c44513a455 “基于面部组件检测与 HOG 特征的人脸表情识别”\n\n[@Jia2014]: https:\u002F\u002Fwww.researchgate.net\u002Ffigure\u002FFig-2-Facial-expression-image-sequence-in-Cohn-Kanade-database_257627744_fig1 “利用 PAD 模型为富有表现力的虚拟人物合成头部及面部动作”\n\n[@vanGent2016]: http:\u002F\u002Fwww.paulvangent.com\u002F2016\u002F04\u002F01\u002Femotion-recognition-with-python-opencv-and-a-face-dataset\u002F “使用 Python、OpenCV 和人脸数据集进行情绪识别。一篇关于 Python 与嵌入式电子技术的趣味技术博客。”\n\n## 贡献者\n\n感谢以下各位优秀的朋友（[表情键](https:\u002F\u002Fallcontributors.org\u002Fdocs\u002Fen\u002Femoji-key)）：\n\n\u003C!-- ALL-CONTRIBUTORS-LIST:START - 请勿删除或修改此部分 -->\n\u003C!-- prettier-ignore-start -->\n\u003C!-- markdownlint-disable -->\n\u003Ctable>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fangelicaperez37\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_da7f75548950.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>angelicaperez37\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=angelicaperez37\" title=\"代码\">💻\u003C\u002Fa> \u003Ca href=\"#blog-angelicaperez37\" title=\"博客文章\">📝\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=angelicaperez37\" title=\"文档\">📖\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsbriley\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_40e65166c5f1.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sbriley\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sbriley\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Ftania.pw\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_73bd094b3b45.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Sofia Tania\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=stania1\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fjahya.net\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_cc9431c9f7e2.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Andrew McWilliams\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=microcosm\" title=\"文档\">📖\u003C\u002Fa> \u003Ca href=\"#ideas-microcosm\" title=\"想法、规划与反馈\">🤔\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.websonthewebs.com\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_12261761b5f6.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Webs\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca 
href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=weberswords\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsaragw6\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_4d3fb2bc621d.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Sara GW\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=saragw6\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Fwww.linkedin.com\u002Fin\u002Fmeganesu\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_1183db2de6d5.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Megan Sullivan\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=meganesu\" title=\"文档\">📖\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsadnantw\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_93cb7f642d16.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>sadnantw\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sadnantw\" title=\"代码\">💻\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sadnantw\" title=\"测试\">⚠️\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Fxuv.be\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_9a35e6b2e7bc.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Julien Deswaef\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=xuv\" title=\"代码\">💻\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=xuv\" title=\"文档\">📖\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsinbycos\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_c85b1d465d01.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Tanushri Chakravorty\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=sinbycos\" title=\"代码\">💻\u003C\u002Fa> \u003Ca href=\"#example-sinbycos\" title=\"示例\">💡\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"http:\u002F\u002Flinas.org\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_0a9cac39b313.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Linas Vepštas\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#plugin-linas\" title=\"插件\u002F工具库\">🔌\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Femilysachs.com\">\u003Cimg 
src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_f57862272842.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Emily Sachs\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=emilysachs\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdianagamedi\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_cb2c640f9577.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Diana Gamez\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dianagamedi\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fdtoakley\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_08075eb7c409.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>dtoakley\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dtoakley\" title=\"文档\">📖\u003C\u002Fa> \u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fcommits?author=dtoakley\" title=\"代码\">💻\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n  \u003Ctr>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fanjutiwari\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_9465cb50d789.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Anju\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#maintenance-anjutiwari\" title=\"维护\">🚧\u003C\u002Fa>\u003C\u002Ftd>\n    \u003Ctd align=\"center\">\u003Ca href=\"https:\u002F\u002Fgithub.com\u002Fsatishdash\">\u003Cimg src=\"https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_readme_70839c557881.png\" width=\"100px;\" alt=\"\"\u002F>\u003Cbr \u002F>\u003Csub>\u003Cb>Satish Dash\u003C\u002Fb>\u003C\u002Fsub>\u003C\u002Fa>\u003Cbr \u002F>\u003Ca href=\"#maintenance-satishdash\" title=\"维护\">🚧\u003C\u002Fa>\u003C\u002Ftd>\n  \u003C\u002Ftr>\n\u003C\u002Ftable>\n\n\u003C!-- markdownlint-enable -->\n\u003C!-- prettier-ignore-end -->\n\u003C!-- ALL-CONTRIBUTORS-LIST:END -->\n\n本项目遵循 [all-contributors](https:\u002F\u002Fgithub.com\u002Fall-contributors\u002Fall-contributors) 规范。欢迎任何形式的贡献！\n\n## 基于 EmoPy 构建的项目\n- [RIOT AI](http:\u002F\u002Fkarenpalmer.uk\u002Fportfolio\u002Friot\u002F)\n- [EmoPy 的 ROS 封装](https:\u002F\u002Fgithub.com\u002Fhansonrobotics\u002Fros_emopy)\n\n希望在此列出您的项目吗？请提交一个 [issue](issues\u002Fnew)（或 pull request），并告诉我们 EmoPy 是如何帮助您的。","# EmoPy 快速上手指南\n\nEmoPy 是一个基于 Python 和深度神经网络的面部表情识别（FER）工具包。它利用公开数据集训练模型，能够根据人脸图像预测人类的情绪表达（如快乐、愤怒、悲伤等）。本项目旨在提供免费、开源且易于集成的神经网络模型。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**：支持 macOS、Linux 或 Windows（本文档以 macOS\u002FLinux 为例，Windows 用户请相应调整命令）。\n*   **Python 版本**：Python 3.6 或更高版本（推荐 3.6.6+）。\n*   **前置依赖**：\n    *   **GraphViz**：用于模型可视化功能。\n    *   **virtualenv**：用于创建独立的 Python 虚拟环境。\n\n### 安装系统依赖 (macOS 示例)\n\n如果您使用 macOS 且未安装 Homebrew，请先安装：\n```bash\n\u002Fusr\u002Fbin\u002Fruby -e \"$(curl -fsSL 
https:\u002F\u002Fraw.githubusercontent.com\u002FHomebrew\u002Finstall\u002Fmaster\u002Finstall)\"\n```\n\n安装 Python 管理工具 pyenv（可选，推荐）和 GraphViz：\n```bash\nbrew install pyenv\nbrew install graphviz\n```\n\n安装 virtualenv：\n```bash\nsudo pip install virtualenv\n```\n\n### 设置虚拟环境\n\n建议创建一个虚拟环境以避免依赖冲突：\n\n```bash\n# 创建虚拟环境 (名为 venv)\npython3.6 -m venv venv\n# 如果使用 pyenv\n# pyenv exec python3.6 -m venv venv\n\n# 激活虚拟环境\nsource venv\u002Fbin\u002Factivate\n```\n激活成功后，终端命令行前缀应显示 `(venv)`。\n\n> **国内开发者提示**：在安装 Python 包时，建议使用国内镜像源（如清华源或阿里源）以加速下载。例如：`pip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple \u003C包名>`。\n\n## 安装步骤\n\n您可以通过 PyPI 直接安装，也可以从源码安装。\n\n### 方式一：通过 PyPI 安装（推荐）\n\n在激活的虚拟环境中运行：\n```bash\npip install EmoPy\n```\n*(国内加速版)*\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple EmoPy\n```\n\n### 方式二：从源码安装\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy.git\ncd EmoPy\npip install -r requirements.txt\n```\n*(国内加速版)*\n```bash\npip install -i https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple -r requirements.txt\n```\n\n## 基本使用\n\nEmoPy 提供了预训练的模型，最简单的入口是使用 `FERModel` 类。以下示例展示了如何加载预训练模型并对样本图片进行情绪预测。\n\n### 代码示例\n\n创建一个名为 `quick_start.py` 的文件，写入以下代码：\n\n```python\nfrom EmoPy.src.fermodel import FERModel\nfrom pkg_resources import resource_filename\n\n# 定义需要预测的目标情绪类别\n# 支持的类别包括：'happiness', 'anger', 'fear', 'surprise', 'disgust', 'sadness', 'calm'\ntarget_emotions = ['calm', 'anger', 'happiness']\n\n# 初始化模型，verbose=True 会输出详细日志\nmodel = FERModel(target_emotions, verbose=True)\n\nprint('Predicting on happy image...')\n# 预测示例图片（此处使用包内自带的示例图片路径）\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_happy_image.png'))\n\nprint('Predicting on disgust image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_disgust_image.png'))\n\nprint('Predicting on anger image...')\nmodel.predict(resource_filename('EmoPy.examples','image_data\u002Fsample_anger_image2.png'))\n```\n\n### 运行结果\n\n在终端运行脚本：\n```bash\npython quick_start.py\n```\n\n程序将加载预训练模型，并输出每张输入图片对应各情绪类别的概率分布。\n\n> **注意**：EmoPy 主要基于公开数据集（如 Microsoft FER+）训练，因此在光线均匀、阴影较少且人脸构图与训练集相似的图片上表现最佳。对于复杂光照或非标准角度的人脸，识别准确率可能会受到影响。","某在线教育平台希望实时分析网课中学生的面部表情，以评估课堂专注度并优化教学内容。\n\n### 没有 EmoPy 时\n- 开发团队需从零收集百万级标注人脸数据并训练深度学习模型，耗时数月且成本高昂。\n- 缺乏现成的开源架构参考，工程师必须手动编写复杂的神经网络代码，技术门槛极高。\n- 难以快速验证想法，任何算法调整都需要重新经历漫长的数据清洗与模型训练周期。\n- 商业闭源方案授权费用昂贵，且无法根据特定教学场景进行定制化修改。\n\n### 使用 EmoPy 后\n- 直接调用预训练的 FER 模型接口，仅需几行 Python 代码即可在数小时内完成部署。\n- 利用内置的多种 Keras\u002FTensorFlow 神经网络架构，轻松对比不同模型在当前光线环境下的表现。\n- 基于公开的 Microsoft FER+ 数据集快速迭代，针对教室均匀光照场景微调参数，显著缩短研发周期。\n- 完全开源免费的特性允许团队深入修改底层逻辑，将表情识别无缝集成到现有的直播流处理管道中。\n\nEmoPy 通过提供开箱即用的深度神经网络工具包，让中小团队也能低成本、高效率地落地人脸情绪识别应用。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fthoughtworksarts_EmoPy_6f0360a2.png","thoughtworksarts","Thoughtworks Arts","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fthoughtworksarts_485e3e83.png","Global research lab incubating collaborations between artists and technologists, investigating impacts of emerging technologies on industry, culture & society",null,"info@thoughtworksarts.io","tw_arts","https:\u002F\u002Fthoughtworksarts.io","https:\u002F\u002Fgithub.com\u002Fthoughtworksarts",[82,86],{"name":83,"color":84,"percentage":85},"Python","#3572A5",99.4,{"name":87,"color":88,"percentage":89},"Makefile","#427819",0.6,968,266,"2026-04-09T02:32:59","AGPL-3.0","macOS, Linux, Windows","未说明 (基于 Keras\u002FTensorFlow 后端，通常支持 CPU 或 GPU，但 README 未明确指定显卡型号或 CUDA 
版本)","未说明",{"notes":98,"python":99,"dependencies":100},"该工具最初在 macOS (OSX) 上使用 Python 3.6.6 进行测试。需要安装 GraphViz 以支持可视化功能。建议使用虚拟环境 (venv 或 pyenv) 进行隔离。由于仅使用公开数据集训练，模型在光线均匀、阴影较少且构图与训练集（如 Microsoft FER+）相似的图像上表现最佳。","3.6+",[101,102,103,104],"Keras","TensorFlow","GraphViz","virtualenv",[14,15,106],"视频",[108,109,110,111,112,113,114,115,116],"deep-neural-networks","facial-expression-recognition","emotion","face","neural-nets","emotion-recognition","python","fer","neural-network","2026-03-27T02:49:30.150509","2026-04-11T15:12:01.910802",[120,125,130,135,140,144],{"id":121,"question_zh":122,"answer_zh":123,"source_url":124},29554,"运行测试或示例时出现 'ModuleNotFoundError: No module named EmoPy' 错误怎么办？","这通常是因为依赖项未正确安装或 Python 版本不兼容。请确保使用 Python 3.6.6，并在虚拟环境中通过命令 `pip install -r requirements.txt` 安装所有依赖。此外，建议在尝试前拉取最新的代码变更，因为维护者可能已更新了如 scikit-image 等库的版本。","https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fissues\u002F42",{"id":126,"question_zh":127,"answer_zh":128,"source_url":129},29555,"加载模型时报错 'OSError: Unable to open file (file signature not found)' 如何解决？","该错误通常是因为模型文件（大文件）未正确下载，原因是仓库使用了 Git LFS 但未安装或未拉取。解决方法是：1. 安装 Git LFS（Mac 用户可用 `brew install git-lfs`，Linux 用户可用 `sudo apt-get install git-lfs`）；2. 运行 `git lfs install`；3. 运行 `git lfs pull` 拉取大文件。如果问题依旧，建议删除仓库重新克隆。","https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fissues\u002F32",{"id":131,"question_zh":132,"answer_zh":133,"source_url":134},29556,"遇到 'numpy.core._multiarray_umath failed to import' 或 TensorFlow 相关崩溃错误怎么办？","这通常是 NumPy 版本冲突或损坏导致的。首先尝试更新 NumPy 包。如果问题仍然存在，请确保你下载并使用的是最新版本的 EmoPy 代码，因为旧版本可能存在兼容性问题。另外，请确保在终端中按照项目层级结构运行脚本（例如进入 examples 目录后运行），而不是直接运行根目录下的初始化文件。","https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fissues\u002F23",{"id":136,"question_zh":137,"answer_zh":138,"source_url":139},29557,"运行单元测试时出现 'ImportError: Import by filename is not supported' 错误？","这个错误通常是因为使用了错误的 Python 版本（如 Python 2.7）或测试命令路径不正确。EmoPy 需要 Python 3 环境。请确保激活了包含 Python 3 的虚拟环境，并检查运行命令的路径是否正确指向了测试模块。","https:\u002F\u002Fgithub.com\u002Fthoughtworksarts\u002FEmoPy\u002Fissues\u002F47",{"id":141,"question_zh":142,"answer_zh":143,"source_url":124},29558,"TensorFlow 1 和 2 之间的版本差异导致运行失败怎么办？","TensorFlow 1 和 2 之间存在破坏性变更，必须使用正确的版本才能运行项目。请严格按照项目 `requirements.txt` 文件中指定的版本安装依赖，不要随意升级 TensorFlow 或 Keras 到最新版本，除非项目明确支持。建议使用 Python 3.6.6 配合项目锁定的依赖版本。",{"id":145,"question_zh":146,"answer_zh":147,"source_url":129},29559,"如何自定义训练情感子集（例如只训练愤怒、恐惧、悲伤等）？","虽然用户常询问如何自定义训练情感子集，但在当前的 `master` 分支中可能尚未完全开放此功能的简易接口。维护者曾提到有一个临时的 `no-file` 分支包含相关功能，正在合并中。建议查看最新文档或切换到特定分支尝试，或者在 `fermodel_example.py` 中修改 `target_emotions` 列表来测试支持的情感组合（需确保对应的模型文件已通过 Git LFS 正确下载）。",[149],{"id":150,"version":151,"summary_zh":152,"released_at":153},206056,"v0.0.5","本次发布整合了以下更改：\n- 添加了一个网络摄像头示例（@sinbycos）\n- 已在 Python 3.7 上测试（@xuv）","2019-02-18T19:49:42"]