[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-pliang279--MultiBench":3,"tool-pliang279--MultiBench":61},[4,18,26,36,44,53],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":17},4358,"openclaw","openclaw\u002Fopenclaw","OpenClaw 是一款专为个人打造的本地化 AI 助手，旨在让你在自己的设备上拥有完全可控的智能伙伴。它打破了传统 AI 助手局限于特定网页或应用的束缚，能够直接接入你日常使用的各类通讯渠道，包括微信、WhatsApp、Telegram、Discord、iMessage 等数十种平台。无论你在哪个聊天软件中发送消息，OpenClaw 都能即时响应，甚至支持在 macOS、iOS 和 Android 设备上进行语音交互，并提供实时的画布渲染功能供你操控。\n\n这款工具主要解决了用户对数据隐私、响应速度以及“始终在线”体验的需求。通过将 AI 部署在本地，用户无需依赖云端服务即可享受快速、私密的智能辅助，真正实现了“你的数据，你做主”。其独特的技术亮点在于强大的网关架构，将控制平面与核心助手分离，确保跨平台通信的流畅性与扩展性。\n\nOpenClaw 非常适合希望构建个性化工作流的技术爱好者、开发者，以及注重隐私保护且不愿被单一生态绑定的普通用户。只要具备基础的终端操作能力（支持 macOS、Linux 及 Windows WSL2），即可通过简单的命令行引导完成部署。如果你渴望拥有一个懂你",349277,3,"2026-04-06T06:32:30",[13,14,15,16],"Agent","开发框架","图像","数据工具","ready",{"id":19,"name":20,"github_repo":21,"description_zh":22,"stars":23,"difficulty_score":10,"last_commit_at":24,"category_tags":25,"status":17},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,"2026-04-05T11:01:52",[14,15,13],{"id":27,"name":28,"github_repo":29,"description_zh":30,"stars":31,"difficulty_score":32,"last_commit_at":33,"category_tags":34,"status":17},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",157379,2,"2026-04-15T23:32:42",[14,13,35],"语言模型",{"id":37,"name":38,"github_repo":39,"description_zh":40,"stars":41,"difficulty_score":32,"last_commit_at":42,"category_tags":43,"status":17},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",108322,"2026-04-10T11:39:34",[14,15,13],{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":32,"last_commit_at":50,"category_tags":51,"status":17},6121,"gemini-cli","google-gemini\u002Fgemini-cli","gemini-cli 是一款由谷歌推出的开源 AI 命令行工具，它将强大的 Gemini 大模型能力直接集成到用户的终端环境中。对于习惯在命令行工作的开发者而言，它提供了一条从输入提示词到获取模型响应的最短路径，无需切换窗口即可享受智能辅助。\n\n这款工具主要解决了开发过程中频繁上下文切换的痛点，让用户能在熟悉的终端界面内直接完成代码理解、生成、调试以及自动化运维任务。无论是查询大型代码库、根据草图生成应用，还是执行复杂的 Git 操作，gemini-cli 都能通过自然语言指令高效处理。\n\n它特别适合广大软件工程师、DevOps 
人员及技术研究人员使用。其核心亮点包括支持高达 100 万 token 的超长上下文窗口，具备出色的逻辑推理能力；内置 Google 搜索、文件操作及 Shell 命令执行等实用工具；更独特的是，它支持 MCP（模型上下文协议），允许用户灵活扩展自定义集成，连接如图像生成等外部能力。此外，个人谷歌账号即可享受免费的额度支持，且项目基于 Apache 2.0 协议完全开源，是提升终端工作效率的理想助手。",100752,"2026-04-10T01:20:03",[52,13,15,14],"插件",{"id":54,"name":55,"github_repo":56,"description_zh":57,"stars":58,"difficulty_score":32,"last_commit_at":59,"category_tags":60,"status":17},4721,"markitdown","microsoft\u002Fmarkitdown","MarkItDown 是一款由微软 AutoGen 团队打造的轻量级 Python 工具，专为将各类文件高效转换为 Markdown 格式而设计。它支持 PDF、Word、Excel、PPT、图片（含 OCR）、音频（含语音转录）、HTML 乃至 YouTube 链接等多种格式的解析，能够精准提取文档中的标题、列表、表格和链接等关键结构信息。\n\n在人工智能应用日益普及的今天，大语言模型（LLM）虽擅长处理文本，却难以直接读取复杂的二进制办公文档。MarkItDown 恰好解决了这一痛点，它将非结构化或半结构化的文件转化为模型“原生理解”且 Token 效率极高的 Markdown 格式，成为连接本地文件与 AI 分析 pipeline 的理想桥梁。此外，它还提供了 MCP（模型上下文协议）服务器，可无缝集成到 Claude Desktop 等 LLM 应用中。\n\n这款工具特别适合开发者、数据科学家及 AI 研究人员使用，尤其是那些需要构建文档检索增强生成（RAG）系统、进行批量文本分析或希望让 AI 助手直接“阅读”本地文件的用户。虽然生成的内容也具备一定可读性，但其核心优势在于为机器",93400,"2026-04-06T19:52:38",[52,14],{"id":62,"github_repo":63,"name":64,"description_en":65,"description_zh":66,"ai_summary_zh":66,"readme_en":67,"readme_zh":68,"quickstart_zh":69,"use_case_zh":70,"hero_image_url":71,"owner_login":72,"owner_name":73,"owner_avatar_url":74,"owner_bio":75,"owner_company":76,"owner_location":77,"owner_email":78,"owner_twitter":72,"owner_website":79,"owner_url":80,"languages":81,"stars":108,"forks":109,"last_commit_at":110,"license":111,"difficulty_score":10,"env_os":112,"env_gpu":113,"env_ram":112,"env_deps":114,"category_tags":123,"github_topics":126,"view_count":32,"oss_zip_url":75,"oss_zip_packed_at":75,"status":17,"created_at":136,"updated_at":137,"faqs":138,"releases":173},7926,"pliang279\u002FMultiBench","MultiBench","[NeurIPS 2021] Multiscale Benchmarks for Multimodal Representation Learning","MultiBench 是一个专为多模态表示学习打造的大规模基准测试平台，由卡内基梅隆大学等机构的研究团队推出。它旨在解决当前多模态研究中资源分散、评估标准不统一的痛点，特别是针对模型在不同领域和模态间的泛化能力、训练与推理的复杂度，以及面对噪声或缺失数据时的鲁棒性等关键挑战。\n\n通过整合来自多媒体、情感计算、机器人、金融及医疗等 6 大领域的 15 个数据集，MultiBench 涵盖了 10 种模态和 20 项预测任务，为研究者提供了一个系统化且统一的评估环境。其独特的技术亮点在于提供了一套自动化的端到端机器学习流水线，极大地简化了从数据处理、模型训练到最终评估的繁琐流程，让研究人员能更专注于算法创新而非工程细节。\n\n这款工具非常适合人工智能领域的研究人员、算法工程师以及高校师生使用。如果你正在探索如何让 AI 更好地同时理解文本、图像、声音等多种信息，或者需要在一个严谨的标准下验证新模型的真实性能，MultiBench 将是你不可或缺的得力助手，帮助推动多模态学习技术向更实用、更稳健的方向发展。","# MultiBench: Multiscale Benchmarks for Multimodal Representation Learning\n\n[MultiBench website](https:\u002F\u002Fcmu-multicomp-lab.github.io\u002Fmultibench\u002F)\n\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fpliang279\u002FMultiBench\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=IN899HIWCF)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fpliang279\u002FMultiBench)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_13d664e1afd7.png)](https:\u002F\u002Fmultibench.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n[Documentation](https:\u002F\u002Fmultibench.readthedocs.io\u002Fen\u002Flatest\u002F), [Tutorials and examples](https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Ftree\u002Fmain\u002Fexamples)\n\n## Contributors\n\nCorrespondence to: \n  - [Paul Pu Liang](http:\u002F\u002Fwww.cs.cmu.edu\u002F~pliang\u002F) (pliang@cs.cmu.edu)\n  - [Yiwei Lyu](https:\u002F\u002Fgithub.com\u002Flvyiwei1) (yiweilyu@umich.edu)\n  - [Xiang Fan](https:\u002F\u002Fgithub.com\u002Fsfanxiang) (xiangfan@cmu.edu)\n  - [Zetian Wu](http:\u002F\u002Fneal-ztwu.github.io) (zwu49@jhu.edu)\n  - [Yun Cheng](https:\u002F\u002Fkapikantzari.github.io) (yc6206@cs.princeton.edu)\n  - [Arav 
Agarwal](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Farav-agarwal-941b44109\u002F) (arava@andrew.cmu.edu)\n  - [Jason Wu](https:\u002F\u002Fjasonwunix.com\u002F) (jsonwu@cmu.edu)\n  - Leslie Chen (lesliechen1998@gmail.com)\n  - [Peter Wu](https:\u002F\u002Fpeter.onrender.com\u002F) (peterw1@cs.cmu.edu)\n  - [Michelle A. Lee](http:\u002F\u002Fstanford.edu\u002F~mishlee\u002F) (michellelee@cs.stanford.edu)\n  - [Yuke Zhu](https:\u002F\u002Fwww.cs.utexas.edu\u002F~yukez\u002F) (yukez@cs.utexas.edu)\n  - [Ruslan Salakhutdinov](https:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002F) (rsalakhu@cs.cmu.edu)\n  - [Louis-Philippe Morency](https:\u002F\u002Fwww.cs.cmu.edu\u002F~morency\u002F) (morency@cs.cmu.edu)\n\n## Paper\n\n[**MultiZoo & MultiBench: A Standardized Toolkit for Multimodal Deep Learning**](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume24\u002F22-1021\u002F22-1021.pdf)\u003Cbr>\nPaul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov\u003Cbr>\nJMLR 2022 Open Source Software.\n\n[**MultiBench: Multiscale Benchmarks for Multimodal Representation Learning**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.07502)\u003Cbr>\nPaul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency\u003Cbr>\nNeurIPS 2021 Datasets and Benchmarks Track.\n\nIf you find this repository useful, please cite our paper and corresponding software package:\n```\n@article{liang2023multizoo,\n  title={MULTIZOO \\& MULTIBENCH: A Standardized Toolkit for Multimodal Deep Learning},\n  author={Liang, Paul Pu and Lyu, Yiwei and Fan, Xiang and Agarwal, Arav and Cheng, Yun and Morency, Louis-Philippe and Salakhutdinov, Ruslan},\n  journal={Journal of Machine Learning Research},\n  volume={24},\n  pages={1--7},\n  year={2023}\n}\n```\n```\n@inproceedings{liang2021multibench,\n  title={MultiBench: Multiscale Benchmarks for Multimodal Representation Learning},\n  author={Liang, Paul Pu and Lyu, Yiwei and Fan, Xiang and Wu, Zetian and Cheng, Yun and Wu, Jason and Chen, Leslie Yufan and Wu, Peter and Lee, Michelle A and Zhu, Yuke and others},\n  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},\n  year={2021}\n}\n```\n\n## Overview\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_9acdc79a31fa.png)\n\nLearning multimodal representations involves integrating information from multiple heterogeneous sources of data. It is a challenging yet crucial area with numerous real-world applications in multimedia, affective computing, robotics, finance, human-computer interaction, and healthcare. Unfortunately, multimodal research has seen limited resources to study (1) generalization across domains and modalities, (2) complexity during training and inference, and (3) robustness to noisy and missing modalities.\n\nIn order to accelerate progress towards understudied modalities and tasks while ensuring real-world robustness, we release MultiBench, a systematic and unified large-scale benchmark for multimodal learning spanning 15 datasets, 10 modalities, 20 prediction tasks, and 6 research areas. MultiBench provides an automated end-to-end machine learning pipeline that simplifies and standardizes data loading, experimental setup, and model evaluation. 
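\n\nFor illustration, here is a rough sketch of that pipeline from data loading to training. This is not canonical MultiBench code: the `get_dataloader` call mirrors the affect example in the Experiments section below, the placeholder model stands in for the MultiZoo modules, and the batch layout and the `all_in_one_train` import and signature are assumptions inferred from descriptions elsewhere in this README:\n\n```python\nimport torch\nfrom torch import nn\nfrom datasets.affect.get_data import get_dataloader  # module path per the Experiments section\nfrom eval_scripts.complexity import all_in_one_train  # assumed import; signature inferred from Evaluation\n\n# Load train\u002Fvalid\u002Ftest dataloaders of preprocessed MOSI data (path is an example)\ntraindata, validdata, testdata = get_dataloader(\n    '\u002Fhome\u002Fpliang\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl',\n    data_type='mosi', max_pad=True, max_seq_len=50)\n\nmodel = nn.Sequential(nn.Flatten(), nn.LazyLinear(1))  # placeholder head, not a MultiZoo model\noptimizer = torch.optim.Adam(model.parameters(), lr=1e-3)\ncriterion = nn.L1Loss()  # MOSI labels are real-valued sentiment scores\n\ndef train():\n    for batch in traindata:\n        *modalities, label = batch           # assumed layout: modalities first, label last\n        pred = model(modalities[0].float())  # sketch: train on the first modality only\n        loss = criterion(pred.squeeze(-1), label.float().squeeze(-1))\n        optimizer.zero_grad()\n        loss.backward()\n        optimizer.step()\n\nall_in_one_train(train, [model])  # runs training, prints runtime, peak memory, parameter count\n```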
\n\nTo reflect real-world requirements, MultiBench is designed to holistically evaluate (1) performance across domains and modalities, (2) complexity during training and inference, and (3) robustness to noisy and missing modalities.\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_e3fd124c28e0.png)\n\nTo accompany MultiBench, we also provide a standardized implementation of 20 core approaches in multimodal learning unifying innovations in fusion paradigms, optimization objectives, and training approaches, which we call MultiZoo. MultiZoo implements these methods in a modular fashion to enable accessibility for new researchers, compositionality of approaches, and reproducibility of results.\n\n## Datasets currently supported\n\n1. Affective computing: MUStARD, CMU-MOSI, UR-FUNNY, CMU-MOSEI\n2. Healthcare: MIMIC\n3. Robotics: MuJoCo Push, Vision & Touch\n4. Finance: Stocks-food, Stocks-health, Stocks-tech\n5. HCI: ENRICO\n6. Multimedia: AV-MNIST, MM-IMDb, Kinetics-S, Kinetics-L\n7. RTFM env\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_e596b8d5daf4.png)\n\nTo add a new dataset:\n\n1. Go to datasets\u002F\n2. Add a new folder if appropriate\n3. Write a python file with a get_dataloader function that returns a tuple of 3 dataloaders (for train, valid, test data respectively) containing preprocessed data. Please follow the existing examples (such as avmnist: datasets\u002Favmnist\u002Fget_data.py); a minimal skeleton is also sketched after the open call section below\n4. Go to examples\u002F and write an example training python file following the existing examples\n5. Check that calling the dataloader and running a simple training script works\n\n## Algorithms supported\n\nSee Appendix Section F for detailed descriptions of each part.\n\n1. Unimodal models: MLP, GRU, LeNet, CNN, LSTM, Transformer, FCN, Random Forest, ResNet, etc. (see unimodals\u002F)\n2. Fusion paradigms: early\u002Flate fusion, NL-gate, tensor fusions, Multiplicative Interactions, Low-Rank Tensor Fusion, etc. (see fusions\u002F)\n3. Optimization objectives: (default: CrossEntropyLoss for classification tasks, MSELoss for regression tasks), ELBO, Weighted Reconstruction Loss, CCA loss, Contrastive Loss, etc. (see objective_functions\u002F)\n4. Training structures: Supervised Learning (which supports Early Fusion, Late Fusion, MVAE, MFM, etc.), Gradient Blend, Architecture Search, etc. (see training_structures\u002F)\n\nTo add a new algorithm:\n\n1. Figure out which subfolder to add it into:\n- unimodals\u002F : unimodal architectures\n- fusions\u002F : multimodal fusion architectures\n- objective_functions\u002F : objective functions in addition to supervised training loss (e.g., VAE loss, contrastive loss)\n- training_structures\u002F : training algorithms excluding objective functions (e.g., balancing generalization, architecture search outer RL loop)\n2. See examples\u002F and write an example training python file following the existing examples\n3. Check that calling the added functions and running a simple training script works\n4. Make sure your new modules are well documented with comments describing their input and output formats and shapes\n\n## Open call for research areas, datasets, tasks, algorithms, and evaluation\n\nWe welcome new contributions to MultiBench through new research areas, datasets, tasks, algorithms, and evaluation. Please refer to the sections above for instructions on adding new datasets and algorithms, and open a pull request if you would like to see a specific dataset or algorithm added. 
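\n\nAs a reference for contributors, here is a minimal, hypothetical skeleton of the `get_dataloader` contract described under \"To add a new dataset\" above (random tensors stand in for real preprocessed modalities; the shapes merely echo AV-MNIST):\n\n```python\nimport torch\nfrom torch.utils.data import DataLoader, TensorDataset\n\ndef get_dataloader(path, batch_size=32):\n    """Hypothetical skeleton for a new datasets\u002Fyour_dataset\u002Fget_data.py:\n    return a tuple of (train, valid, test) dataloaders of preprocessed data."""\n    splits = []\n    for name in ('train', 'valid', 'test'):\n        # A real loader would read preprocessed modalities from `path` instead.\n        image = torch.randn(128, 28, 28)    # e.g., an image-like modality\n        audio = torch.randn(128, 112, 112)  # e.g., a spectrogram-like modality\n        labels = torch.randint(0, 10, (128,))\n        splits.append(DataLoader(TensorDataset(image, audio, labels),\n                                 batch_size=batch_size, shuffle=(name == 'train')))\n    return tuple(splits)\n```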
\n\nWe plan to use MultiBench as a theme for future workshops, competitions, and academic courses - stay tuned for upcoming calls for participation!\n\n## Experiments\n\n### Affective Computing\n\nWe release the processed datasets: [sarcasm](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1JFcX-NF97zu9ZOZGALGU9kp8dwkP7aJ7?usp=sharing), [mosi](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uEK737LXB9jAlf9kyqRs6B9N6cDncodq?usp=sharing), [mosei](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1A_hTmifi824gypelGobgl2M-5Rw9VWHv?usp=sharing), [humor](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Agzm157lciMONHOHemHRSySmjn1ahHX1?usp=sharing). The original datasets are also publicly available at [MultimodalSDK](https:\u002F\u002Fgithub.com\u002Fmatsuolab\u002FCMU-MultimodalSDK) for MOSI and MOSEI, [MUStARD](https:\u002F\u002Fgithub.com\u002Fsoujanyaporia\u002FMUStARD), and [UR-FUNNY](https:\u002F\u002Fgithub.com\u002FROC-HCI\u002FUR-FUNNY). You can obtain processed data with `datasets\u002Faffect\u002Fget_data.py`; note that `sarcasm` refers to [MUStARD](https:\u002F\u002Fgithub.com\u002Fsoujanyaporia\u002FMUStARD) and `humor` to [UR-FUNNY](https:\u002F\u002Fgithub.com\u002FROC-HCI\u002FUR-FUNNY).\n\nThere are several example scripts for running affect datasets under examples\u002Faffect\u002F. For example, to run affect datasets with simple late fusion, first, you can use\n\n```\ntraindata, validdata, test_robust = get_dataloader('\u002Fhome\u002Fpliang\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl', data_type='mosi')\n```\n\nor, if you don't want to use packed data and instead want data padded to the same max sequence length, use the `max_pad` and `max_seq_len` options, and remember to set `is_packed=False` in the `train` and `test` functions\n\n```\ntraindata, validdata, testdata = get_dataloader('\u002Fhome\u002Fpliang\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl', data_type='mosi', max_pad=True, max_seq_len=50)\n```\n\nthen do\n\n```\npython3 examples\u002Faffect\u002Faffect_late_fusion.py\n```\n\n### Healthcare\n\nThe MIMIC dataset has restricted access. To gain access to the preprocessed version of this dataset, please follow the instructions [here](https:\u002F\u002Fmimic.mit.edu\u002Fiv\u002Faccess\u002F) to gain the necessary credentials. Once you have the credentials, email yiweilyu@umich.edu with proof of your credentials and ask for the preprocessed 'im.pk' file.\n\nAfter you have the 'im.pk' file, you can get the dataloaders of this dataset by calling the get_dataloader function in examples\u002Fmimic\u002Fget_data.py. The get_dataloader function takes 2 inputs: the first specifies which task you want to do (-1 means the mortality task, 1 means the icd9 10-19 task, 7 means the icd9 70-79 task). The input modalities will be static (vector of size 5) and time-series (24x30 shaped).\n\nThere are several example scripts for running MIMIC under examples\u002Fhealthcare\u002F. For example, to run MIMIC with Low Rank Tensor Fusion, do\n\n```\npython3 examples\u002Fhealthcare\u002Fmimic_low_rank_tensor.py\n```\n\n### Robotics\n\n#### Vision & Touch\n\nFor the Vision and Touch dataset, the script for downloading the data (download_data.sh) is included in the dataset\u002Frobotics\u002F folder. After the data is downloaded, use dataset\u002Frobotics\u002Fdata_loader.py to access the preprocessed dataloaders. Note that this dataset only has train and valid sets, so the output will be a tuple of 2 dataloaders instead of 3. 
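\n\nA usage sketch (the import path follows the folder named above, but the exact `get_data` signature is an assumption; the `output='ee_yaw_next'` task switch is described next):\n\n```python\n# Hypothetical usage of the Vision & Touch loaders described above.\nfrom dataset.robotics.data_loader import get_data  # assumed import path\n\ntrain_loader, valid_loader = get_data()              # note: 2 loaders, no test split\nee_train, ee_valid = get_data(output='ee_yaw_next')  # End Effector task (see below)\n\nbatch = next(iter(train_loader))\nprint([t.shape for t in batch if hasattr(t, 'shape')])  # inspect modality tensor shapes\n```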
\n\nThe default task is Contact, but you can get the dataloaders for the End Effector task by passing in \"output='ee_yaw_next'\" as an argument to the get_data function.\n\nFor more detailed information on this dataset, see the original [repo](https:\u002F\u002Fgithub.com\u002Fstanford-iprl-lab\u002Fmultimodal_representation).\n\nThere are several example scripts for running Vision and Touch under examples\u002Frobotics\u002F. For example, to run Vision and Touch with Low Rank Tensor Fusion on the Contact task, do\n\n```\npython3 examples\u002Frobotics\u002FLRTF.py\n```\n\n#### MuJoCo Push (Gentle Push)\n\nThe code for MuJoCo Push experiments can be found under the `examples\u002Fgentle_push` directory. Each model type has its own Python file under this directory, which can be directly executed to run the experiments.\n\nFor example, to run the late fusion model:\n\n```sh\npython examples\u002Fgentle_push\u002FLF.py\n```\n\nThis will also download the dataset to `datasets\u002Fgentle_push\u002Fcache` on the first run. Since the original dataset is hosted on Google Drive, sometimes the automatic download may fail for various reasons. We observed that running on Colab solves the issue. Additionally, you can download these files manually and place them at the correct locations:\n- Download [gentle_push_10.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1qmBCfsAGu8eew-CQFmV1svodl9VJa6fX\u002Fview) to `datasets\u002Fgentle_push\u002Fcache\u002F1qmBCfsAGu8eew-CQFmV1svodl9VJa6fX-gentle_push_10.hdf5`.\n- Download [gentle_push_300.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F18dr1z0N__yFiP_DAKxy-Hs9Vy_AsaW6Q\u002Fview) to `datasets\u002Fgentle_push\u002Fcache\u002F18dr1z0N__yFiP_DAKxy-Hs9Vy_AsaW6Q-gentle_push_300.hdf5`.\n- Download [gentle_push_1000.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1JTgmq1KPRK9HYi8BgvljKg5MPqT_N4cR\u002Fview) to `datasets\u002Fgentle_push\u002Fcache\u002F1JTgmq1KPRK9HYi8BgvljKg5MPqT_N4cR-gentle_push_1000.hdf5`.\n\n### Finance\n\nThe code for finance experiments can be found under the `examples\u002Ffinance` directory. Each model type has its own Python file under this directory. Each file accepts two arguments, `--input-stocks` and `--target-stock`. For example, to run simple late fusion on the stocks benchmarked in the paper:\n\n```sh\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'MCD SBUX HSY HRL' --target-stock 'MCD'\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'AAPL MSFT AMZN INTC AMD MSI' --target-stock 'MSFT'\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'MRK WST CVS MCK ABT UNH TFX' --target-stock 'UNH'\n```\n\nYou can specify arbitrary stocks to be downloaded. The data loader will automatically download the data for you. 
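\n\nFor instance, a small driver that sweeps the three benchmarked stock groups above by shelling out to the documented example script (a convenience sketch, not part of the repo):\n\n```python\nimport subprocess\n\n# (input-stocks, target-stock) pairs benchmarked in the paper, per the commands above\nRUNS = [\n    ('MCD SBUX HSY HRL', 'MCD'),\n    ('AAPL MSFT AMZN INTC AMD MSI', 'MSFT'),\n    ('MRK WST CVS MCK ABT UNH TFX', 'UNH'),\n]\n\nfor input_stocks, target in RUNS:\n    subprocess.run(\n        ['python', 'examples\u002Ffinance\u002Fstocks_late_fusion.py',\n         '--input-stocks', input_stocks, '--target-stock', target],\n        check=True)  # data for each stock list is downloaded automatically on first use\n```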
\n\nIf the stocks do not cover the date range defined in `datasets\u002Fstocks\u002Fget_data.py`, a different date range can be specified.\n\nFor unimodal experiments, run `stocks_early_fusion.py` with the same stock passed to `--input-stocks` and `--target-stock`.\n\nBelow is a full list of stocks under each category outlined in the paper:\n\n```yaml\nF&B (18): CAG CMG CPB DPZ DRI GIS HRL HSY K KHC LW MCD MDLZ MKC SBUX SJM TSN YUM\nHealth (63): ABT ABBV ABMD A ALXN ALGN ABC AMGN ANTM BAX BDX BIO BIIB BSX BMY CAH CTLT CNC CERN CI COO CVS DHR DVA XRAY DXCM EW GILD HCA HSIC HOLX HUM IDXX ILMN INCY ISRG IQV JNJ LH LLY MCK MDT MRK MTD PKI PRGO PFE DGX REGN RMD STE SYK TFX TMO UNH UHS VAR VRTX VTRS WAT WST ZBH ZTS\nTech (100): AAPL ACN ADBE ADI ADP ADSK AKAM AMAT AMD ANET ANSS APH ATVI AVGO BR CDNS CDW CHTR CMCSA CRM CSCO CTSH CTXS DIS DISCA DISCK DISH DXC EA ENPH FB FFIV FIS FISV FLIR FLT FOX FOXA FTNT GLW GOOG GOOGL GPN HPE HPQ IBM INTC INTU IPG IPGP IT JKHY JNPR KEYS KLAC LRCX LUMN LYV MA MCHP MPWR MSFT MSI MU MXIM NFLX NLOK NOW NTAP NVDA NWS NWSA NXPI OMC ORCL PAYC PAYX PYPL QCOM QRVO SNPS STX SWKS T TEL TER TMUS TRMB TTWO TWTR TXN TYL V VIAC VRSN VZ WDC WU XLNX ZBRA\n```\n\n### HCI\nThe code for HCI experiments can be found under the `examples\u002Fhci` directory.\nOur experiments use the [ENRICO](https:\u002F\u002Fgithub.com\u002Fluileito\u002Fenrico) dataset, which contains application screenshots and their UI layouts. App screens are classified into 20 different design categories.\n\nThe unimodal examples can be run using the following commands:\n\nScreenshot modality\n\n```\npython examples\u002Fhci\u002Fenrico_unimodal_0.py\n```\n\nUI Layout modality\n\n```\npython examples\u002Fhci\u002Fenrico_unimodal_1.py\n```\n\nThe multimodal examples are found in the same directory. As an example:\n\nSimple Late Fusion\n\n```\npython examples\u002Fhci\u002Fenrico_simple_late_fusion.py\n```\n\n### Multimedia\n\nTo access AV-MNIST, download the avmnist.tar.gz file from [here](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KvKynJJca5tDtI5Mmp6CoRh9pQywH8Xp\u002Fview?usp=sharing) and untar it. Then, input the location of the avmnist file to the get_dataloader function in the datasets\u002Favmnist\u002Fget_data.py script. The input modalities are black-white images (28x28 tensors) and audio spectrograms (112x112 tensors).\n\nThere are several example scripts for running AV-MNIST under examples\u002Fmultimedia\u002F. For example, to run AV-MNIST with Simple Late Fusion with Concatenation, do\n```\npython examples\u002Fmultimedia\u002Favmnist_simple_late_fusion.py\n```\n\nTo access MM-IMDb, download multimodal_imdb.hdf5 from [here](https:\u002F\u002Farchive.org\u002Fdownload\u002Fmmimdb\u002Fmultimodal_imdb.hdf5); we also use the raw data from [here](https:\u002F\u002Farchive.org\u002Fdownload\u002Fmmimdb\u002Fmmimdb.tar.gz) to test models' robustness.\n\nThere are several example scripts for running MM-IMDb under examples\u002Fmultimedia\u002F. To run experiments, input the location of the hdf5 file to the get_dataloader function in each of the examples. Then, taking Text and Image with Simple Late Fusion with Concatenation as an example, do\n```\npython examples\u002Fmultimedia\u002Fmmimdb_simple_late_fusion.py\n```
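\n\nBefore wiring the hdf5 file into the examples, you can sanity-check the download with a quick inspection script (a sketch; the filename assumes the file sits in the working directory):\n\n```python\nimport h5py  # pip install h5py\n\n# Path is an example; point it at your downloaded copy.\nwith h5py.File('multimodal_imdb.hdf5', 'r') as f:\n    # List every dataset inside the file along with its shape.\n    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))\n```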
\n\nScripts for the Kinetics dataset are located in the `special` directory. Run `python special\u002Fkinetics_*.py` for the respective script.\n\nTo access Clotho, clone the [clotho-dataset](https:\u002F\u002Fgithub.com\u002Faudio-captioning\u002Fclotho-dataset) repository somewhere on your device and follow the instructions in the README of that repository to download and preprocess the data (use the one-step preprocess approach). To get the dataloader, input the path to the \"clotho-dataset\" repo to the get_dataloaders function in the datasets\u002Fclotho\u002Fget_data.py script. The default data are audio features (padded to 2574x64) and text caption word indices (padded to 20x18).\n\n\n## Evaluation\n\n### Complexity\n\nWe have a script (`eval_scripts\u002Fcomplexity.py`) for recording complexity data for training and testing: peak memory, number of parameters, and time for training, as well as number of parameters and time for testing. You will need to install [memory_profiler](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemory-profiler\u002F) to run this script. It provides 2 useful functions: `all_in_one_train`, which takes a function reference for the training process along with all the modules involved in training, runs the training process, and prints the total runtime, peak memory, and total number of parameters; and `all_in_one_test`, which takes a function reference for the testing process along with all the modules involved in testing, runs the testing process, and prints the total runtime and total number of parameters.\n\nFor example usage, see `examples\u002Fhealthcare\u002Fmimic_baseline_track_complexity.py` (which adds complexity measuring to the script `examples\u002Fhealthcare\u002Fmimic_baseline.py`)\n\n### Robustness\n\nModality-specific and multimodal imperfection implementations are under `robustness`, organized by modality. We have a script (`eval_scripts\u002Frobustness.py`) that reports robustness metrics for testing on data with modality-specific and multimodal imperfections. It also plots the performance-imperfection curve and saves it to the default directory.\n\nAll robustness experiments are now integrated into the standard training\u002Ftesting scripts.\n\nWe visualize the experiment results using two metrics, relative and effective robustness, as well as a combination of both. These plots indicate the tradeoff between accuracy and robustness:\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_b47bb0e4754d.png)\n\n## References\n\n## Patch Note \u002F Major Updates\n\n6\u002F11\u002F2021: Refactored some code. Specifically, we deprecated the Simple_Early_Fusion, Simple_Late_Fusion, MVAE, MFM, CCA, and Contrastive training structures in favor of the new `Supervised_Learning` training structure, and modified some `examples\u002F` files accordingly. We also integrated the dataloaders and testing scripts for robustness experiments into the regular ones. The deprecated training structures as well as their examples can be found in the `deprecated_training_structures\u002F` and `deprecated_examples\u002F` folders. The deprecated dataloaders and testing scripts specifically for robustness can be found in the `deprecated_dataloaders\u002F` and `deprecated_examples_robust\u002F` folders.\n\n7\u002F9\u002F2021: Added support for Clotho (audio captioning), Yummly-28K (image-text retrieval), and RTFM (language-guided reinforcement learning). 
We plan to use this as a starting point to gradually expand our repo to include QA, retrieval, generative, and RL tasks as well.\n","# MultiBench：多模态表征学习的多尺度基准测试\n\n[MultiBench 官网](https:\u002F\u002Fcmu-multicomp-lab.github.io\u002Fmultibench\u002F)\n\n[![codecov](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fpliang279\u002FMultiBench\u002Fbranch\u002Fmain\u002Fgraph\u002Fbadge.svg?token=IN899HIWCF)](https:\u002F\u002Fcodecov.io\u002Fgh\u002Fpliang279\u002FMultiBench)\n[![Documentation Status](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_13d664e1afd7.png)](https:\u002F\u002Fmultibench.readthedocs.io\u002Fen\u002Flatest\u002F?badge=latest)\n\n[文档](https:\u002F\u002Fmultibench.readthedocs.io\u002Fen\u002Flatest\u002F)，[教程与示例](https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Ftree\u002Fmain\u002Fexamples)\n\n## 贡献者\n\n联系人：\n  - [Paul Pu Liang](http:\u002F\u002Fwww.cs.cmu.edu\u002F~pliang\u002F) (pliang@cs.cmu.edu)\n  - [Yiwei Lyu](https:\u002F\u002Fgithub.com\u002Flvyiwei1) (yiweilyu@umich.edu)\n  - [Xiang Fan](https:\u002F\u002Fgithub.com\u002Fsfanxiang) (xiangfan@cmu.edu)\n  - [Zetian Wu](http:\u002F\u002Fneal-ztwu.github.io) (zwu49@jhu.edu)\n  - [Yun Cheng](https:\u002F\u002Fkapikantzari.github.io) (yc6206@cs.princeton.edu)\n  - [Arav Agarwal](https:\u002F\u002Fwww.linkedin.com\u002Fin\u002Farav-agarwal-941b44109\u002F) (arava@andrew.cmu.edu)\n  - [Jason Wu](https:\u002F\u002Fjasonwunix.com\u002F) (jsonwu@cmu.edu)\n  - Leslie Chen (lesliechen1998@gmail.com)\n  - [Peter Wu](https:\u002F\u002Fpeter.onrender.com\u002F) (peterw1@cs.cmu.edu)\n  - [Michelle A. Lee](http:\u002F\u002Fstanford.edu\u002F~mishlee\u002F) (michellelee@cs.stanford.edu)\n  - [Yuke Zhu](https:\u002F\u002Fwww.cs.utexas.edu\u002F~yukez\u002F) (yukez@cs.utexas.edu)\n  - [Ruslan Salakhutdinov](https:\u002F\u002Fwww.cs.cmu.edu\u002F~rsalakhu\u002F) (rsalakhu@cs.cmu.edu)\n  - [Louis-Philippe Morency](https:\u002F\u002Fwww.cs.cmu.edu\u002F~morency\u002F) (morency@cs.cmu.edu)\n\n## 论文\n\n[**MultiZoo & MultiBench：多模态深度学习的标准化工具包**](https:\u002F\u002Fwww.jmlr.org\u002Fpapers\u002Fvolume24\u002F22-1021\u002F22-1021.pdf)\u003Cbr>\nPaul Pu Liang, Yiwei Lyu, Xiang Fan, Arav Agarwal, Yun Cheng, Louis-Philippe Morency, Ruslan Salakhutdinov\u003Cbr>\nJMLR 2022 开源软件。\n\n[**MultiBench：多模态表征学习的多尺度基准测试**](https:\u002F\u002Farxiv.org\u002Fabs\u002F2107.07502)\u003Cbr>\nPaul Pu Liang, Yiwei Lyu, Xiang Fan, Zetian Wu, Yun Cheng, Jason Wu, Leslie Chen, Peter Wu, Michelle A. 
Lee, Yuke Zhu, Ruslan Salakhutdinov, Louis-Philippe Morency\u003Cbr>\nNeurIPS 2021 数据集与基准测试赛道。\n\n如果您觉得本仓库有用，请引用我们的论文及相应软件包：\n```\n@article{liang2023multizoo,\n  title={MULTIZOO \\& MULTIBENCH: A Standardized Toolkit for Multimodal Deep Learning},\n  author={Liang, Paul Pu and Lyu, Yiwei and Fan, Xiang and Agarwal, Arav and Cheng, Yun and Morency, Louis-Philippe and Salakhutdinov, Ruslan},\n  journal={Journal of Machine Learning Research},\n  volume={24},\n  pages={1--7},\n  year={2023}\n}\n```\n```\n@inproceedings{liang2021multibench,\n  title={MultiBench: Multiscale Benchmarks for Multimodal Representation Learning},\n  author={Liang, Paul Pu and Lyu, Yiwei and Fan, Xiang and Wu, Zetian and Cheng, Yun and Wu, Jason and Chen, Leslie Yufan and Wu, Peter and Lee, Michelle A and Zhu, Yuke and others},\n  booktitle={Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1)},\n  year={2021}\n}\n```\n\n## 概述\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_9acdc79a31fa.png)\n\n学习多模态表征涉及整合来自多个异构数据源的信息。这是一个充满挑战但又至关重要的领域，在多媒体、情感计算、机器人技术、金融、人机交互和医疗健康等多个现实应用场景中具有广泛的应用价值。然而，目前多模态研究在以下三个方面仍面临资源不足的问题：(1) 跨领域和跨模态的泛化能力；(2) 训练和推理过程中的复杂性；以及 (3) 对噪声和缺失模态的鲁棒性。\n\n为了加速对尚未充分研究的模态和任务的研究进展，并确保其在实际应用中的鲁棒性，我们发布了 MultiBench，这是一个系统化、统一的大规模多模态学习基准测试平台，涵盖 15 个数据集、10 种模态、20 个预测任务和 6 个研究领域。MultiBench 提供了一个自动化的端到端机器学习流水线，简化并标准化了数据加载、实验设置和模型评估流程。为反映真实世界的需求，MultiBench 旨在全面评估：(1) 跨领域和跨模态的性能；(2) 训练和推理过程中的复杂性；以及 (3) 对噪声和缺失模态的鲁棒性。\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_e3fd124c28e0.png)\n\n为配合 MultiBench，我们还提供了一套标准化的多模态学习核心方法实现，即 MultiZoo，它整合了融合范式、优化目标和训练方法等方面的创新成果。MultiZoo 以模块化的方式实现了这些方法，从而便于新研究人员上手，支持方法的组合使用，并确保结果的可重复性。\n\n## 当前支持的数据集\n\n1. 情感计算：MUStARD、CMU-MOSI、UR-FUNNY、CMU-MOSEI\n2. 医疗健康：MIMIC\n3. 机器人技术：MuJoCo Push、Vision & Touch\n4. 金融：股票-食品、股票-健康、股票-科技\n5. 人机交互：ENRICO\n6. 多媒体：AV-MNIST、MM-IMDb、Kinetics-S、Kinetics-L\n7. RTFM 环境\n\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_e596b8d5daf4.png)\n\n添加新数据集的步骤：\n\n1. 进入 datasets\u002F 目录。\n2. 如有必要，创建一个新的文件夹。\n3. 编写一个 Python 文件，其中包含一个 get_dataloader 函数，该函数应返回一个包含三个 DataLoader 的元组（分别对应训练、验证和测试数据），且这些 DataLoader 中应包含预处理后的数据。请参考现有示例（如 avmnist：datasets\u002Favmnist\u002Fget_data.py）。\n4. 进入 examples\u002F 目录，编写一个基于现有示例的训练 Python 示例文件。\n5. 检查调用 DataLoader 并运行简单训练脚本是否正常工作。\n\n## 支持的算法\n\n各部分的详细描述请参见附录 F 部分。\n\n1. 单模态模型：MLP、GRU、LeNet、CNN、LSTM、Transformer、FCN、随机森林、ResNet 等（参见 unimodals\u002F 目录）。\n2. 融合范式：早期\u002F晚期融合、NL-gate、张量融合、乘法交互、低秩张量融合等（参见 fusions\u002F 目录）。\n3. 优化目标：（默认：分类任务使用 CrossEntropyLoss，回归任务使用 MSELoss）、ELBO、加权重建损失、CCA 损失、对比损失等（参见 objective_functions\u002F 目录）。\n4. 训练结构：监督学习（支持早期融合、晚期融合、MVAE、MFM 等）、梯度混合、架构搜索等（参见 training_structures\u002F 目录）。\n\n添加新算法的步骤：\n\n1. 确定将其添加到哪个子目录：\n   - unimodals\u002F：单模态架构。\n   - fusions\u002F：多模态融合架构。\n   - objective_functions\u002F：除监督训练损失外的其他目标函数（如 VAE 损失、对比损失）。\n   - training_structures\u002F：不包括目标函数的训练算法（如平衡泛化、架构搜索的外部强化学习循环）。\n2. 参照 examples\u002F 目录中的示例，编写一个基于现有示例的训练 Python 示例文件。\n3. 检查调用新增函数并运行简单训练脚本是否正常工作。\n4. 
确保您的新模块在输入输出格式和形状等方面有清晰的注释说明。\n\n## 研究领域、数据集、任务、算法和评估的公开征集\n\n我们欢迎通过新增研究领域、数据集、任务、算法和评估来为 MultiBench 做出贡献。请参阅上方各部分，了解如何添加新的数据集和算法；如果您希望添加特定的数据集或算法，请提交一个拉取请求。我们计划将 MultiBench 作为未来研讨会、竞赛和学术课程的主题——敬请关注后续的参与邀请！\n\n## 实验\n\n### 情感计算\n\n我们发布了经过处理的数据集：[sarcasm](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1JFcX-NF97zu9ZOZGALGU9kp8dwkP7aJ7?usp=sharing)、[mosi](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1uEK737LXB9jAlf9kyqRs6B9N6cDncodq?usp=sharing)、[mosei](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1A_hTmifi824gypelGobgl2M-5Rw9VWHv?usp=sharing)、[humor](https:\u002F\u002Fdrive.google.com\u002Fdrive\u002Ffolders\u002F1Agzm157lciMONHOHemHRSySmjn1ahHX1?usp=sharing)。原始数据集同样可在以下位置公开获取：MOSI 和 MOSEI 的 [MultimodalSDK](https:\u002F\u002Fgithub.com\u002Fmatsuolab\u002FCMU-MultimodalSDK)，以及 [MUsTARD](https:\u002F\u002Fgithub.com\u002Fsoujanyaporia\u002FMUStARD) 和 [UR-Funny](https:\u002F\u002Fgithub.com\u002FROC-HCI\u002FUR-FUNNY)。您可以通过 `datasets\u002Faffect\u002Fget_data.py` 获取处理后的数据，请注意，“sarcasm”指代的是 [MUsTARD](https:\u002F\u002Fgithub.com\u002Fsoujanyaporia\u002FMUStARD)，而“humor”则指代 [UR-Funny](https:\u002F\u002Fgithub.com\u002FROC-HCI\u002FUR-FUNNY)。\n\nexamples\u002Faffect\u002F 目录下提供了若干运行情感计算数据集的示例脚本。例如，要使用简单的晚期融合方法运行情感数据集，首先可以使用：\n\n```\ntraindata, validdata, test_robust = get_dataloader('\u002Fhome\u002Fpliang\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl', data_type='mosi')\n```\n\n或者，如果您不想使用打包数据，而是希望数据具有相同的最大序列长度，可以使用 `max_pad` 和 `max_seq_len` 参数，并记得在 `train` 和 `test` 函数中将 `is_packed` 设置为 `False`：\n\n```\ntraindata, validdata, testdata = get_dataloader('\u002Fhome\u002Fpliang\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl', data_type='mosi', max_pad=True, max_seq_len=50)\n```\n\n随后执行：\n\n```\npython3 examples\u002Faffect\u002Faffect_late_fusion.py\n```\n\n### 医疗健康\n\nMIMIC 数据集属于受限访问资源。如需获取该数据集的预处理版本，请按照 [此处](https:\u002F\u002Fmimic.mit.edu\u002Fiv\u002Faccess\u002F) 的说明申请必要的访问权限。获得访问凭证后，请将凭证证明发送至 yiweilyu@umich.edu，并索取预处理后的 ‘im.pk’ 文件。\n\n收到 ‘im.pk’ 文件后，您可以通过调用 examples\u002Fmimic\u002Fget_data.py 中的 `get_dataloader` 函数来获取该数据集的数据加载器。`get_dataloader` 函数接受两个参数：第一个参数指定您想要执行的任务（-1 表示死亡率预测任务，1 表示 ICD-9 10–19 任务，7 表示 ICD-9 70–79 任务）。输入模态包括静态特征（大小为 5 的向量）和时间序列数据（形状为 24×30）。\n\nexamples\u002Fhealthcare\u002F 目录下提供了若干运行 MIMIC 数据集的示例脚本。例如，要使用低秩张量融合方法运行 MIMIC 数据集，可执行：\n\n```\npython3 examples\u002Fhealthcare\u002Fmimic_low_rank_tensor.py\n```\n\n### 机器人学\n\n#### 视觉与触觉\n\n对于视觉与触觉数据集，下载脚本已包含在 dataset\u002Frobotics\u002F 文件夹中（download_data.sh）。数据下载完成后，可使用 dataset\u002Frobotics\u002Fdata_loader.py 来访问预处理后的数据加载器。请注意，该数据集仅包含训练集和验证集，因此输出将是一个由两个数据加载器组成的元组，而非三个。默认任务为接触任务，但您也可以通过在 `get_data` 函数中传入 `output='ee_yaw_next'` 参数来获取末端执行器任务的数据加载器。\n\n有关该数据集的更多详细信息，请参阅其原始 [仓库](https:\u002F\u002Fgithub.com\u002Fstanford-iprl-lab\u002Fmultimodal_representation)。\n\nexamples\u002Frobotics\u002F 目录下提供了若干运行视觉与触觉数据集的示例脚本。例如，要在接触任务上使用低秩张量融合方法运行视觉与触觉数据集，可执行：\n\n```\npython3 examples\u002Frobotics\u002FLRTF.py\n```\n\n#### MuJoCo 推送（轻推）\n\nMuJoCo 推送实验的相关代码位于 `examples\u002Fgentle_push` 目录下。该目录下的每个模型类型都有各自的 Python 文件，可以直接运行以进行实验。\n\n例如，要运行晚期融合模型：\n\n```sh\npython examples\u002Fgentle_push\u002FLF.py\n```\n\n首次运行时，该命令还会将数据集下载到 `datasets\u002Fgentle_push\u002Fcache` 目录下。由于原始数据集托管在 Google Drive 上，有时自动下载可能会因各种原因失败。我们观察到，在 Colab 上运行可以解决这一问题。此外，您也可以手动下载这些文件并将其放置到正确的位置：\n\n- 下载 [gentle_push_10.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1qmBCfsAGu8eew-CQFmV1svodl9VJa6fX\u002Fview) 至 
`datasets\u002Fgentle_push\u002Fcache\u002F1qmBCfsAGu8eew-CQFmV1svodl9VJa6fX-gentle_push_10.hdf5`。\n- 下载 [gentle_push_300.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F18dr1z0N__yFiP_DAKxy-Hs9Vy_AsaW6Q\u002Fview) 至 `datasets\u002Fgentle_push\u002Fcache\u002F18dr1z0N__yFiP_DAKxy-Hs9Vy_AsaW6Q-gentle_push_300.hdf5`。\n- 下载 [gentle_push_1000.hdf5](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1JTgmq1KPRK9HYi8BgvljKg5MPqT_N4cR\u002Fview) 至 `datasets\u002Fgentle_push\u002Fcache\u002F1JTgmq1KPRK9HYi8BgvljKg5MPqT_N4cR-gentle_push_1000.hdf5`。\n\n### 金融\n\n金融实验的代码位于 `examples\u002Ffinance` 目录下。该目录下每个模型类型都有一个对应的 Python 文件。每个文件接受两个参数：`--input-stocks` 和 `--target-stock`。例如，要在论文中基准测试的股票上运行简单的晚期融合：\n\n```sh\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'MCD SBUX HSY HRL' --target-stock 'MCD'\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'AAPL MSFT AMZN INTC AMD MSI' --target-stock 'MSFT'\npython examples\u002Ffinance\u002Fstocks_late_fusion.py --input-stocks 'MRK WST CVS MCK ABT UNH TFX' --target-stock 'UNH'\n```\n\n您可以指定任意要下载的股票。数据加载器会自动为您下载数据。如果这些股票不涵盖 `datasets\u002Fstocks\u002Fget_data.py` 中定义的时间范围，则可以指定不同的时间范围。\n\n对于单模态实验，请使用相同的股票作为 `--input-stocks` 和 `--target-stock` 参数，运行 `stocks_early_fusion.py`。\n\n以下是论文中概述的每个类别下的完整股票列表：\n\n```yaml\n食品与饮料 (18): CAG CMG CPB DPZ DRI GIS HRL HSY K KHC LW MCD MDLZ MKC SBUX SJM TSN YUM\n健康 (63): ABT ABBV ABMD A ALXN ALGN ABC AMGN ANTM BAX BDX BIO BIIB BSX BMY CAH CTLT CNC CERN CI COO CVS DHR DVA XRAY DXCM EW GILD HCA HSIC HOLX HUM IDXX ILMN INCY ISRG IQV JNJ LH LLY MCK MDT MRK MTD PKI PRGO PFE DGX REGN RMD STE SYK TFX TMO UNH UHS VAR VRTX VTRS WAT WST ZBH ZTS\n科技 (100): AAPL ACN ADBE ADI ADP ADSK AKAM AMAT AMD ANET ANSS APH ATVI AVGO BR CDNS CDW CHTR CMCSA CRM CSCO CTSH CTXS DIS DISCA DISCK DISH DXC EA ENPH FB FFIV FIS FISV FLIR FLT FOX FOXA FTNT GLW GOOG GOOGL GPN HPE HPQ IBM INTC INTU IPG IPGP IT JKHY JNPR KEYS KLAC LRCX LUMN LYV MA MCHP MPWR MSFT MSI MU MXIM NFLX NLOK NOW NTAP NVDA NWS NWSA NXPI OMC ORCL PAYC PAYX PYPL QCOM QRVO SNPS STX SWKS T TEL TER TMUS TRMB TTWO TWTR TXN TYL V VIAC VRSN VZ WDC WU XLNX ZBRA\n```\n\n### 人机交互\n\n人机交互实验的代码位于 `examples\u002Fhci` 目录下。\n我们的实验使用 [ENRICO](https:\u002F\u002Fgithub.com\u002Fluileito\u002Fenrico) 数据集，其中包含应用程序截图及其 UI 布局。应用屏幕被分为 20 种不同的设计类别。\n\n单模态示例可以通过以下命令运行：\n\n截图模态\n\n```\npython examples\u002Fhci\u002Fenrico_unimodal_0.py\n```\n\nUI 布局模态\n\n```\npython examples\u002Fhci\u002Fenrico_unimodal_1.py\n```\n\n多模态示例也在同一目录中。例如：\n\n简单晚期融合\n\n```\npython examples\u002Fhci\u002Fenrico_simple_late_fusion.py\n```\n\n### 多媒体\n\n要访问 AV-MNIST，请从 [这里](https:\u002F\u002Fdrive.google.com\u002Ffile\u002Fd\u002F1KvKynJJca5tDtI5Mmp6CoRh9pQywH8Xp\u002Fview?usp=sharing) 下载 avmnist.tar.gz 文件并解压。然后，将 avmnist 文件的位置输入到 `datasets\u002Favmnist\u002Fget_data.py` 脚本中的 get_dataloader 函数中。输入模态为黑白图像（28x28 张量）和音频频谱图（112x112 张量）。\n\n在 `examples\u002Fmultimedia\u002F` 目录下有几个运行 AV-MNIST 的示例脚本。例如，要使用简单晚期融合和拼接方法运行 AV-MNIST，可以执行：\n```\npython examples\u002Fmultimedia\u002Favmnist_simple_late_fusion.py\n```\n\n要访问 MM-IMDb，请从 [这里](https:\u002F\u002Farchive.org\u002Fdownload\u002Fmmimdb\u002Fmultimodal_imdb.hdf5) 下载 multimodal_imdb.hdf5 文件，并且我们还使用来自 [这里](https:\u002F\u002Farchive.org\u002Fdownload\u002Fmmimdb\u002Fmmimdb.tar.gz) 的原始数据来测试模型的鲁棒性。\n\n在 `examples\u002Fmultimedia\u002F` 目录下有几个运行 MM-IMDb 的示例脚本。要运行实验，需要将 hdf5 文件的位置输入到每个示例中的 get_dataloader 函数中。然后，以文本和图像为例，使用简单晚期融合和拼接方法，可以执行：\n```\npython examples\u002Fmultimedia\u002Fmmimdb_simple_late_fusion.py\n```
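\n\n在将该 hdf5 文件接入示例脚本之前，可以先用一段简单的检查脚本确认下载是否完整（仅为示意，文件名假设其位于当前工作目录）：\n\n```python\nimport h5py  # 需先安装：pip install h5py\n\n# 路径仅为示例，请指向您下载的文件\nwith h5py.File('multimodal_imdb.hdf5', 'r') as f:\n    # 列出文件内部的全部数据集及其形状\n    f.visititems(lambda name, obj: print(name, getattr(obj, 'shape', '')))\n```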
\n\nKinetics 数据集的脚本位于 `special` 目录中。运行相应的脚本时，请使用 `python special\u002Fkinetics_*.py`。\n\n要访问 Clotho 数据集，需在您的设备上克隆 [clotho-dataset](https:\u002F\u002Fgithub.com\u002Faudio-captioning\u002Fclotho-dataset) 仓库，并按照该仓库的 README 文件中的说明下载和预处理数据（采用一步式预处理方法）。要获取数据加载器，需将 “clotho-dataset” 仓库的路径输入到 `datasets\u002Fclotho\u002Fget_data.py` 脚本中的 get_dataloaders 函数中。默认数据包括音频特征（填充至 2574x64）和文本字幕索引（填充至 20x18）。\n\n## 评估\n\n### 复杂度\n\n我们有一个脚本 (`eval_scripts\u002Fcomplexity.py`) 用于记录训练和测试的复杂度数据，包括峰值内存、参数数量和训练时间，以及测试时的参数数量和时间。您需要安装 [memory_profiler](https:\u002F\u002Fpypi.org\u002Fproject\u002Fmemory-profiler\u002F) 才能运行此脚本。它提供了两个有用的功能：`all_in_one_train`，可接收训练过程的函数引用以及所有参与训练的模块，运行训练过程并输出总运行时间、峰值内存和总参数数量；`all_in_one_test`，可接收测试过程的函数引用以及所有参与测试的模块，运行测试过程并输出总运行时间和总参数数量。\n\n有关示例用法，请参阅 `examples\u002Fhealthcare\u002Fmimic_baseline_track_complexity.py`（该脚本将复杂度测量添加到了 `examples\u002Fhealthcare\u002Fmimic_baseline.py` 脚本中）。\n\n### 鲁棒性\n\n针对特定模态及多模态的数据缺陷（imperfection）实现位于 `robustness` 目录下，按模态分类组织。我们有一个脚本 (`eval_scripts\u002Frobustness.py`) 可以报告在含特定模态及多模态缺陷的数据上进行测试时的鲁棒性指标。它还会绘制性能-缺陷程度曲线并保存到默认目录。\n\n所有鲁棒性实验现在都已集成到标准的训练\u002F测试脚本中。\n\n我们使用相对鲁棒性和有效鲁棒性这两个指标，以及两者的结合，来可视化实验结果。这些图表显示了准确性和鲁棒性之间的权衡：\n![](https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_readme_b47bb0e4754d.png)\n\n## 参考文献\n\n## 补丁说明 \u002F 重大更新\n\n2021年6月11日：重构了部分代码。具体来说，我们弃用了 Simple_Early_Fusion、Simple_Late_Fusion、MVAE、MFM、CCA 以及对比学习等训练结构，代之以新的 `Supervised_Learning` 训练结构，并相应地修改了 `examples\u002F` 目录下的部分文件。此外，我们将用于鲁棒性实验的数据加载器和测试脚本整合到了常规版本中。被弃用的训练结构及其示例位于 `deprecated_training_structures\u002F` 和 `deprecated_examples\u002F` 文件夹中；专门用于鲁棒性实验的已弃用数据加载器和测试脚本则位于 `deprecated_dataloaders\u002F` 和 `deprecated_examples_robust\u002F` 文件夹中。\n\n2021年7月9日：新增对 Clotho（音频字幕生成）、Yummly-28K（图像-文本检索）以及 RTFM（语言引导的强化学习）任务的支持。我们计划以此为起点，逐步扩展该仓库，使其涵盖问答、检索、生成式任务以及强化学习等多种任务类型。","# MultiBench 快速上手指南\n\nMultiBench 是一个系统化、统一的大规模多模态学习基准测试工具，涵盖 15 个数据集、10 种模态和 20 项预测任务。它提供了自动化的端到端机器学习流水线，简化了数据加载、实验设置和模型评估，旨在全面评估模型在跨域泛化、训练复杂度以及对噪声和缺失模态的鲁棒性方面的表现。\n\n## 环境准备\n\n在开始之前，请确保您的开发环境满足以下要求：\n\n*   **操作系统**: Linux (推荐 Ubuntu 18.04+) 或 macOS。Windows 用户建议使用 WSL2。\n*   **Python 版本**: Python 3.7 或更高版本。\n*   **深度学习框架**: PyTorch (建议版本 1.8+)。\n*   **其他依赖**: `numpy`, `scikit-learn`, `pandas` 等常用科学计算库。\n\n**前置依赖安装建议：**\n推荐使用 `conda` 创建独立虚拟环境以避免依赖冲突。国内用户可使用清华或中科大镜像源加速安装。\n\n```bash\n# 创建并激活虚拟环境\nconda create -n multibench python=3.8\nconda activate multibench\n\n# 安装 PyTorch (以 CUDA 11.8 为例，国内用户推荐替换为清华源)\npip install torch torchvision torchaudio --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n\n# 安装其他基础依赖\npip install numpy pandas scikit-learn tqdm matplotlib --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n## 安装步骤\n\n克隆 MultiBench 仓库并安装项目依赖：\n\n```bash\n# 克隆仓库 (国内用户若速度慢可使用 Gitee 镜像或代理)\ngit clone https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench.git\ncd MultiBench\n\n# 安装项目所需依赖\npip install -r requirements.txt --index-url https:\u002F\u002Fpypi.tuna.tsinghua.edu.cn\u002Fsimple\n```\n\n> **注意**：如果根目录下没有 `requirements.txt`，请根据 `examples` 中的具体示例脚本按需安装缺失的库，或直接运行示例脚本让报错提示引导安装。\n\n## 基本使用\n\nMultiBench 的核心工作流分为三步：**获取数据加载器 (DataLoader)** -> **定义模型\u002F融合策略** -> **运行训练脚本**。\n\n以下以情感计算领域的 **CMU-MOSI** 数据集为例，演示如何使用简单的“晚期融合 (Late Fusion)”策略进行训练。\n\n### 1. 准备数据\n部分预处理后的数据集需要手动下载。对于 MOSI 数据集，请下载处理好的 `.pkl` 文件（参考 README 中的 Google Drive 链接），并将其放置在本地路径，例如 `\u002Fhome\u002Fuser\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl`。\n\n### 2. 
编写或运行示例脚本\nMultiBench 在 `examples\u002F` 目录下提供了各领域的标准示例。您可以直接修改现有的示例脚本，或参考其逻辑编写新脚本。\n\n**核心代码逻辑示例：**\n\n```python\n# 针对 MOSI 数据集，引用 datasets.affect.get_data；其他数据集请引用对应模块（如 datasets.avmnist.get_data）\nfrom datasets.affect.get_data import get_dataloader\n\n# 1. 获取数据加载器\n# 参数说明：数据文件路径，数据类型，是否填充至最大长度，最大序列长度\ntraindata, validdata, testdata = get_dataloader(\n    '\u002Fhome\u002Fuser\u002Fmultibench\u002Faffect\u002Fpack\u002Fmosi\u002Fmosi_raw.pkl', \n    data_type='mosi', \n    max_pad=True, \n    max_seq_len=50\n)\n\n# 2. 定义模型与训练流程\n# MultiBench 内置了多种单模态模型 (unimodals) 和融合策略 (fusions)\n# 具体的模型实例化和训练循环请参考 examples\u002Faffect\u002Faffect_late_fusion.py\n```\n\n### 3. 执行训练\n直接使用仓库中提供的示例脚本运行训练。确保已正确修改脚本中的数据路径。\n\n```bash\n# 运行情感计算领域的晚期融合示例\npython3 examples\u002Faffect\u002Faffect_late_fusion.py\n```\n\n### 扩展其他领域\nMultiBench 支持医疗、机器人、金融等多个领域，使用方法类似：\n*   **医疗 (MIMIC)**: 需先申请权限获取 `im.pk` 文件，然后运行 `python3 examples\u002Fhealthcare\u002Fmimic_low_rank_tensor.py`。\n*   **机器人 (Vision & Touch)**: 运行 `datasets\u002Frobotics\u002Fdownload_data.sh` 下载数据后，执行 `python3 examples\u002Frobotics\u002FLRTF.py`。\n\n更多详细算法实现（如张量融合、对比学习损失函数等）可查看 `fusions\u002F`、`objective_functions\u002F` 和 `training_structures\u002F` 目录。","某医疗 AI 团队正致力于开发一套能结合患者面部视频、语音语调及电子病历文本的多模态抑郁症辅助诊断系统。\n\n### 没有 MultiBench 时\n- **数据整合极其耗时**：团队需手动收集并清洗来自不同源的视频、音频和文本数据，格式不统一导致预处理代码重复编写，耗费数周时间。\n- **评估标准混乱**：缺乏统一的基准测试，难以判断模型在“缺失某种模态”（如只有文本无视频）时的鲁棒性，实验结果无法与学术界现有成果公平对比。\n- **泛化能力验证困难**：仅能在单一数据集上训练，无法快速验证模型是否能迁移到其他医疗场景或应对噪声干扰，上线风险高。\n- **复现成本高昂**：参考论文中的多模态融合方法时，因缺少标准化的流水线，复现基线模型往往需要从头搭建架构，效率低下。\n\n### 使用 MultiBench 后\n- **一键加载多源数据**：直接调用 MultiBench 内置的 15 个数据集接口，自动对齐视频、音频和文本模态，将数据准备周期从数周缩短至几天。\n- **标准化鲁棒性测试**：利用其预置的评估协议，快速量化模型在模态缺失或含噪情况下的性能表现，确保系统在真实临床环境中的稳定性。\n- **跨域泛化轻松验证**：借助覆盖 6 大研究领域的基准任务，团队能迅速测试算法在不同模态组合下的泛化边界，提前发现潜在缺陷。\n- **高效复现与迭代**：通过端到端的自动化机器学习流水线，直接复用官方提供的基线模型进行对比实验，让研发重心回归到核心算法创新上。\n\nMultiBench 通过提供统一的大规模基准和自动化流水线，彻底解决了多模态学习中数据割裂与评估缺失的难题，显著加速了从理论研究到实际落地的进程。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002Fpliang279_MultiBench_9acdc79a.png","pliang279","Paul Liang","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002Fpliang279_13d6d50a.jpg",null,"Massachusetts Institute of Technology","Cambridge, MA","ppliang@mit.edu","https:\u002F\u002Fpliang279.github.io\u002F","https:\u002F\u002Fgithub.com\u002Fpliang279",[82,86,90,94,98,102,105],{"name":83,"color":84,"percentage":85},"HTML","#e34c26",70.3,{"name":87,"color":88,"percentage":89},"Python","#3572A5",27.4,{"name":91,"color":92,"percentage":93},"JavaScript","#f1e05a",1.5,{"name":95,"color":96,"percentage":97},"CSS","#663399",0.8,{"name":99,"color":100,"percentage":101},"Shell","#89e051",0,{"name":103,"color":104,"percentage":101},"Batchfile","#C1F12E",{"name":106,"color":107,"percentage":101},"Makefile","#427819",627,93,"2026-04-13T08:30:06","MIT","未说明","未说明 (项目涉及多模态深度学习及 MuJoCo 机器人仿真，通常建议配备 NVIDIA GPU，但 README 未明确具体型号或显存要求)",{"notes":115,"python":116,"dependencies":117},"1. 部分数据集（如 MIMIC 医疗数据）访问受限，需申请凭证并联系作者获取预处理文件。\n2. 机器人任务（MuJoCo Push）的数据集托管在 Google Drive，自动下载可能失败，建议在 Colab 环境运行或手动下载。\n3. Vision & Touch 数据集仅包含训练集和验证集。\n4. 
项目提供标准化的数据加载器（get_dataloader）和多种融合范式、优化目标的实现。","3.x (README 示例中使用 python3，未指定具体小版本)",[118,119,120,121,122],"torch (PyTorch)","numpy","scikit-learn (Random Forest 提及)","transformers (Transformer 架构提及)","gym\u002Fmujoco (机器人任务依赖)",[35,14,124,125,15],"其他","音频",[127,128,129,130,131,132,133,134,135],"machine-learning","multimodal-learning","robotics","natural-language-processing","computer-vision","deep-learning","healthcare","representation-learning","speech-processing","2026-03-27T02:49:30.150509","2026-04-16T10:46:47.932980",[139,144,149,154,159,164,168],{"id":140,"question_zh":141,"answer_zh":142,"source_url":143},35766,"MOSEI 和 MOSI 数据集的原始视频链接失效（404），如何获取原始数据？","由于 CMU-MOSI\u002FMOSEI 数据集作者在约一年前更新了策略，移除了所有原始数据的公开访问权限，因此原链接已失效。如果您确实需要访问原始视频而非处理后的特征，建议直接联系该数据集的作者获取。目前仓库中提供的是预处理后的特征数据。","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F35",{"id":145,"question_zh":146,"answer_zh":147,"source_url":148},35767,"MOSEI 数据集中的标签为什么是浮点数（如 1.333）而不是整数？","MOSEI 是一个回归数据集，其标签代表情感强度。每个数据实例的标签是标注者给出的平均分值，因此是 -3.0 到 3.0 之间的浮点数。这是评估回归任务的传统方法（使用 L1Loss 计算 MAE）。如果您需要进行分类任务（如二分类、五分类或七分类），可以将这些浮点数标签转换为整数类别（例如将正负值划分为两类）。","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F8",{"id":150,"question_zh":151,"answer_zh":152,"source_url":153},35768,"加载 MOSEI 数据集时，视频和音频数据的形状为什么是 [batch_size, sequence_length, feature_dim] 而不是标准的视频格式？","这是因为数据集中的视频和音频数据已经是预处理过的特征（分别使用 OpenFace 或 Facet 提取），而不是原始像素帧。因此，默认格式为 [batch_size, 序列长度，特征维度]（例如视频为 [batchsize, 50, 35]）。如果您需要使用标准的视频格式（如 [batchsize, channel, clip_length, crop_size, crop_size]）来训练视频模型，可以从原始数据集来源（http:\u002F\u002Fimmortal.multicomp.cs.cmu.edu\u002Fraw_datasets\u002F）下载原始视频自行处理。","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F16",{"id":155,"question_zh":156,"answer_zh":157,"source_url":158},35769,"为什么在 MOSEI 数据集上将分类任务作为回归任务来训练？应该使用什么损失函数？","默认情况下，MOSEI 被视为回归数据集，文献中通常使用回归损失（如 MAE\u002FL1Loss），因为这样可以保留情感强度的信息，而简单的 0\u002F1 标签会丢失这部分信息。一般来说，较好的 MAE 也能保证较好的准确率（Acc）。不过，您也可以使用 BCE 或 CrossEntropy 等分类损失函数，将任务转化为传统的分类设置来评估您的模型，这也是一种标准的做法。","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F12",{"id":160,"question_zh":161,"answer_zh":162,"source_url":163},35770,"运行 IMDB 数据集相关代码时报错缺少 'blocks' 或出现语法错误，如何解决？","IMDB 数据集相关的旧代码依赖 theano 和 blocks，这可能导致环境兼容性问题。维护者已在仓库中推送了新的环境配置文件（environment file）。请尝试使用仓库中提供的最新环境文件重新配置您的运行环境，以确保与项目工作时的环境一致。如果问题仍然存在，可能是由于重构代码时处理数据的字典未更新，请检查并下载最新版本的处理后数据。","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F9",{"id":165,"question_zh":166,"answer_zh":167,"source_url":153},35771,"如何打开 MOSEI 数据集中的 .csd 标签文件以及如何获取视频文件名列表？",".csd 文件本质上就是 .h5 文件，您可以使用 Python 的 h5py 库打开它。文件中包含了文件名信息，您只需在读取到的名称后添加 '.mp4' 后缀即可得到视频文件名。如果需要更详细的结构信息，可以参考多模态 SDK 文档。",{"id":169,"question_zh":170,"answer_zh":171,"source_url":172},35772,"代码更新后无法运行，导入报错或路径问题如何解决？","最近的代码更改可能导致了路径导入问题（例如 `sys.path.append` 的位置变化）。维护者建议在调用 `get_dataloader` 时添加 `task` 参数以适配新接口。例如：\n```python\ntraindata, validdata, test_robust = get_dataloader(\n    'path\u002Fto\u002Fmosi_data.pkl', batch_size=128, max_pad=True, robust_test=False, data_type='mosi')\n```\n此外，如果遇到多模态交互维度的不一致错误，可以尝试修改融合层的计算方式，例如使用：\n```python\nreturn torch.einsum('bm, bmp -> bp', modalities[2], self.a(modalities[0:2])) + self.b(modalities[0:2])\n```","https:\u002F\u002Fgithub.com\u002Fpliang279\u002FMultiBench\u002Fissues\u002F15",[]]