[{"data":1,"prerenderedAt":-1},["ShallowReactive",2],{"similar-AlgoTraders--stock-analysis-engine":3,"tool-AlgoTraders--stock-analysis-engine":64},[4,17,27,35,44,52],{"id":5,"name":6,"github_repo":7,"description_zh":8,"stars":9,"difficulty_score":10,"last_commit_at":11,"category_tags":12,"status":16},3808,"stable-diffusion-webui","AUTOMATIC1111\u002Fstable-diffusion-webui","stable-diffusion-webui 是一个基于 Gradio 构建的网页版操作界面，旨在让用户能够轻松地在本地运行和使用强大的 Stable Diffusion 图像生成模型。它解决了原始模型依赖命令行、操作门槛高且功能分散的痛点，将复杂的 AI 绘图流程整合进一个直观易用的图形化平台。\n\n无论是希望快速上手的普通创作者、需要精细控制画面细节的设计师，还是想要深入探索模型潜力的开发者与研究人员，都能从中获益。其核心亮点在于极高的功能丰富度：不仅支持文生图、图生图、局部重绘（Inpainting）和外绘（Outpainting）等基础模式，还独创了注意力机制调整、提示词矩阵、负向提示词以及“高清修复”等高级功能。此外，它内置了 GFPGAN 和 CodeFormer 等人脸修复工具，支持多种神经网络放大算法，并允许用户通过插件系统无限扩展能力。即使是显存有限的设备，stable-diffusion-webui 也提供了相应的优化选项，让高质量的 AI 艺术创作变得触手可及。",162132,3,"2026-04-05T11:01:52",[13,14,15],"开发框架","图像","Agent","ready",{"id":18,"name":19,"github_repo":20,"description_zh":21,"stars":22,"difficulty_score":23,"last_commit_at":24,"category_tags":25,"status":16},1381,"everything-claude-code","affaan-m\u002Feverything-claude-code","everything-claude-code 是一套专为 AI 编程助手（如 Claude Code、Codex、Cursor 等）打造的高性能优化系统。它不仅仅是一组配置文件，而是一个经过长期实战打磨的完整框架，旨在解决 AI 代理在实际开发中面临的效率低下、记忆丢失、安全隐患及缺乏持续学习能力等核心痛点。\n\n通过引入技能模块化、直觉增强、记忆持久化机制以及内置的安全扫描功能，everything-claude-code 能显著提升 AI 在复杂任务中的表现，帮助开发者构建更稳定、更智能的生产级 AI 代理。其独特的“研究优先”开发理念和针对 Token 消耗的优化策略，使得模型响应更快、成本更低，同时有效防御潜在的攻击向量。\n\n这套工具特别适合软件开发者、AI 研究人员以及希望深度定制 AI 工作流的技术团队使用。无论您是在构建大型代码库，还是需要 AI 协助进行安全审计与自动化测试，everything-claude-code 都能提供强大的底层支持。作为一个曾荣获 Anthropic 黑客大奖的开源项目，它融合了多语言支持与丰富的实战钩子（hooks），让 AI 真正成长为懂上",140436,2,"2026-04-05T23:32:43",[13,15,26],"语言模型",{"id":28,"name":29,"github_repo":30,"description_zh":31,"stars":32,"difficulty_score":23,"last_commit_at":33,"category_tags":34,"status":16},2271,"ComfyUI","Comfy-Org\u002FComfyUI","ComfyUI 是一款功能强大且高度模块化的视觉 AI 引擎，专为设计和执行复杂的 Stable Diffusion 图像生成流程而打造。它摒弃了传统的代码编写模式，采用直观的节点式流程图界面，让用户通过连接不同的功能模块即可构建个性化的生成管线。\n\n这一设计巧妙解决了高级 AI 
绘图工作流配置复杂、灵活性不足的痛点。用户无需具备编程背景，也能自由组合模型、调整参数并实时预览效果，轻松实现从基础文生图到多步骤高清修复等各类复杂任务。ComfyUI 拥有极佳的兼容性，不仅支持 Windows、macOS 和 Linux 全平台，还广泛适配 NVIDIA、AMD、Intel 及苹果 Silicon 等多种硬件架构，并率先支持 SDXL、Flux、SD3 等前沿模型。\n\n无论是希望深入探索算法潜力的研究人员和开发者，还是追求极致创作自由度的设计师与资深 AI 绘画爱好者，ComfyUI 都能提供强大的支持。其独特的模块化架构允许社区不断扩展新功能，使其成为当前最灵活、生态最丰富的开源扩散模型工具之一，帮助用户将创意高效转化为现实。",107662,"2026-04-03T11:11:01",[13,14,15],{"id":36,"name":37,"github_repo":38,"description_zh":39,"stars":40,"difficulty_score":10,"last_commit_at":41,"category_tags":42,"status":16},4292,"Deep-Live-Cam","hacksider\u002FDeep-Live-Cam","Deep-Live-Cam 是一款专注于实时换脸与视频生成的开源工具，用户仅需一张静态照片，即可通过“一键操作”实现摄像头画面的即时变脸或制作深度伪造视频。它有效解决了传统换脸技术流程繁琐、对硬件配置要求极高以及难以实时预览的痛点，让高质量的数字内容创作变得触手可及。\n\n这款工具不仅适合开发者和技术研究人员探索算法边界，更因其极简的操作逻辑（仅需三步：选脸、选摄像头、启动），广泛适用于普通用户、内容创作者、设计师及直播主播。无论是为了动画角色定制、服装展示模特替换，还是制作趣味短视频和直播互动，Deep-Live-Cam 都能提供流畅的支持。\n\n其核心技术亮点在于强大的实时处理能力，支持口型遮罩（Mouth Mask）以保留使用者原始的嘴部动作，确保表情自然精准；同时具备“人脸映射”功能，可同时对画面中的多个主体应用不同面孔。此外，项目内置了严格的内容安全过滤机制，自动拦截涉及裸露、暴力等不当素材，并倡导用户在获得授权及明确标注的前提下合规使用，体现了技术发展与伦理责任的平衡。",88924,"2026-04-06T03:28:53",[13,14,15,43],"视频",{"id":45,"name":46,"github_repo":47,"description_zh":48,"stars":49,"difficulty_score":23,"last_commit_at":50,"category_tags":51,"status":16},3704,"NextChat","ChatGPTNextWeb\u002FNextChat","NextChat 是一款轻量且极速的 AI 助手，旨在为用户提供流畅、跨平台的大模型交互体验。它完美解决了用户在多设备间切换时难以保持对话连续性，以及面对众多 AI 模型不知如何统一管理的痛点。无论是日常办公、学习辅助还是创意激发，NextChat 都能让用户随时随地通过网页、iOS、Android、Windows、MacOS 或 Linux 端无缝接入智能服务。\n\n这款工具非常适合普通用户、学生、职场人士以及需要私有化部署的企业团队使用。对于开发者而言，它也提供了便捷的自托管方案，支持一键部署到 Vercel 或 Zeabur 等平台。\n\nNextChat 的核心亮点在于其广泛的模型兼容性，原生支持 Claude、DeepSeek、GPT-4 及 Gemini Pro 等主流大模型，让用户在一个界面即可自由切换不同 AI 能力。此外，它还率先支持 MCP（Model Context Protocol）协议，增强了上下文处理能力。针对企业用户，NextChat 
提供专业版解决方案，具备品牌定制、细粒度权限控制、内部知识库整合及安全审计等功能，满足公司对数据隐私和个性化管理的高标准要求。",87618,"2026-04-05T07:20:52",[13,26],{"id":53,"name":54,"github_repo":55,"description_zh":56,"stars":57,"difficulty_score":23,"last_commit_at":58,"category_tags":59,"status":16},2268,"ML-For-Beginners","microsoft\u002FML-For-Beginners","ML-For-Beginners 是由微软推出的一套系统化机器学习入门课程，旨在帮助零基础用户轻松掌握经典机器学习知识。这套课程将学习路径规划为 12 周，包含 26 节精炼课程和 52 道配套测验，内容涵盖从基础概念到实际应用的完整流程，有效解决了初学者面对庞大知识体系时无从下手、缺乏结构化指导的痛点。\n\n无论是希望转型的开发者、需要补充算法背景的研究人员，还是对人工智能充满好奇的普通爱好者，都能从中受益。课程不仅提供了清晰的理论讲解，还强调动手实践，让用户在循序渐进中建立扎实的技能基础。其独特的亮点在于强大的多语言支持，通过自动化机制提供了包括简体中文在内的 50 多种语言版本，极大地降低了全球不同背景用户的学习门槛。此外，项目采用开源协作模式，社区活跃且内容持续更新，确保学习者能获取前沿且准确的技术资讯。如果你正寻找一条清晰、友好且专业的机器学习入门之路，ML-For-Beginners 将是理想的起点。",84991,"2026-04-05T10:45:23",[14,60,43,61,15,62,26,13,63],"数据工具","插件","其他","音频",{"id":65,"github_repo":66,"name":67,"description_en":68,"description_zh":69,"ai_summary_zh":69,"readme_en":70,"readme_zh":71,"quickstart_zh":72,"use_case_zh":73,"hero_image_url":74,"owner_login":75,"owner_name":75,"owner_avatar_url":76,"owner_bio":77,"owner_company":77,"owner_location":77,"owner_email":77,"owner_twitter":77,"owner_website":77,"owner_url":78,"languages":79,"stars":106,"forks":107,"last_commit_at":108,"license":77,"difficulty_score":109,"env_os":110,"env_gpu":111,"env_ram":112,"env_deps":113,"category_tags":122,"github_topics":123,"view_count":23,"oss_zip_url":77,"oss_zip_packed_at":77,"status":16,"created_at":144,"updated_at":145,"faqs":146,"releases":172},4119,"AlgoTraders\u002Fstock-analysis-engine","stock-analysis-engine","Backtest 1000s of minute-by-minute trading algorithms for training AI with automated pricing data from: IEX, Tradier and FinViz. Datasets and trading performance automatically published to S3 for building AI training datasets for teaching DNNs how to trade. Runs on Kubernetes and docker-compose. >150 million trading history rows generated from +5000 algorithms. 
Heads up: Yahoo's Finance API was disabled on 2019-01-03 https:\u002F\u002Fdeveloper.yahoo.com\u002Fyql\u002F","stock-analysis-engine 是一款专为量化交易与人工智能训练设计的开源回测引擎。它旨在解决金融 AI 模型训练中高质量历史数据匮乏及策略验证困难的痛点，能够自动从 IEX Cloud、Tradier 和 FinViz 等权威源获取包括分钟级行情、期权、新闻及财务指标在内的多维数据。\n\n该工具的核心价值在于其强大的自动化能力：支持对数千种交易算法进行高频回测，已生成超过 1.5 亿条历史交易记录，并能将清洗后的数据集与绩效表现自动发布至 S3 存储，直接用于训练深度神经网络（DNN）预测股价走势。在技术架构上，stock-analysis-engine 原生支持 Docker Compose 与 Kubernetes 部署，具备优秀的分布式扩展性，甚至可通过 Metalnetes 在裸金属服务器上并行运行多个实例，轻松应对海量数据处理需求。\n\n这款工具非常适合量化开发者、AI 研究人员以及希望构建自动化交易系统的技术团队使用。对于想要深入探索“如何用 AI 炒股”的极客而言，它提供了一套从数据抓取、策略回测到模型训练的全流程基础设施，让复杂的金融数据分析变得高效且可复现。","Stock Analysis Engine\n=====================\n\nBuild and tune investment algorithms for use with `artificial intelligence (deep neural networks) \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FComparing-3-Deep-Neural-Networks-Trained-to-Predict-a-Stocks-Closing-Price-Using-The-Analysis-Engine.ipynb>`__ with a distributed stack for running backtests using live pricing data on publicly traded companies with automated datafeeds from: `IEX Cloud \u003Chttps:\u002F\u002Fiexcloud.io\u002F>`__, `Tradier \u003Chttps:\u002F\u002Ftradier.com\u002F>`__ and `FinViz \u003Chttps:\u002F\u002Ffinviz.com>`__ (includes: pricing, options, news, dividends, daily, intraday, screeners, statistics, financials, earnings, and more).\n\nKubernetes users please refer to `the Helm guide to get started \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fdeploy_on_kubernetes_using_helm.html>`__ and `Metalnetes for running multiple Analysis Engines at the same time on a bare-metal server \u003Chttps:\u002F\u002Fmetalnetes.readthedocs.io\u002Fen\u002Flatest\u002F#>`__\n\n.. 
image:: https:\u002F\u002Fi.imgur.com\u002Ftw2wJ6t.png\n\nFetch the Latest Pricing Data\n=============================\n\nSupported fetch methods for getting pricing data:\n\n- Command line using ``fetch`` command\n- `IEX Cloud Fetch API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fiex_api.html#iex-fetch-api-reference>`__\n- `Tradier Fetch API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Ftradier.html#tradier-fetch-api-reference>`__\n- Docker-compose using ``.\u002Fcompose\u002Fstart.sh -c``\n- Kubernetes jobs: `Fetch Intraday \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_intraday_per_minute.yml>`__, `Fetch Daily \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_daily.yml>`__, `Fetch Weekly \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_weekly.yml>`__, or `Fetch from only Tradier \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_tradier_per_minute.yml>`__\n\nFetch using the Command Line\n----------------------------\n\nHere is a video showing how to fetch the latest pricing data for a ticker using the command line:\n\n.. image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220460.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220460?autoplay=1\n    :alt: Fetch Pricing Data using the Command Line\n\n#.  Clone to ``\u002Fopt\u002Fsa``\n\n    ::\n\n        git clone https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine.git \u002Fopt\u002Fsa\n        cd \u002Fopt\u002Fsa\n\n#.  
Create Docker Mounts and Start Redis and Minio\n\n    This will pull `Redis \u003Chttps:\u002F\u002Fhub.docker.com\u002F_\u002Fredis>`__ and `Minio \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fminio\u002Fminio>`__ docker images.\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh -a\n\n#.  Fetch All Pricing Data\n\n    #.  `Run through the Getting Started section \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine#getting-started>`__\n\n    #.  Fetch pricing data from `IEX Cloud (requires an account and uses on-demand usage pricing) \u003Chttps:\u002F\u002Fiexcloud.io\u002Fcloud-login#\u002Fregister\u002F>`__ and `Tradier (requires an account) \u003Chttps:\u002F\u002Fdeveloper.tradier.com\u002Fgetting_started>`__:\n\n        - Set the **IEX_TOKEN** environment variable to fetch from the IEX Cloud datafeeds:\n\n        ::\n\n            export IEX_TOKEN=YOUR_IEX_TOKEN\n\n        - Set the **TD_TOKEN** environment variable to fetch from the Tradier datafeeds:\n\n        ::\n\n            export TD_TOKEN=YOUR_TRADIER_TOKEN\n\n        - Fetch with:\n\n        ::\n\n            fetch -t SPY\n\n        - Fetch only from **IEX** with **-g iex**:\n\n        ::\n\n            fetch -t SPY -g iex\n            # and fetch from just Tradier with:\n            # fetch -t SPY -g td\n\n        - Fetch previous 30 calendar days of intraday minute pricing data from IEX Cloud\n\n        ::\n\n            backfill-minute-data.sh TICKER\n            # backfill-minute-data.sh SPY\n\n    #.  Please refer to `the documentation for more examples on controlling your pricing request usage (including how to run fetches for intraday, daily and weekly use cases) \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fscripts.html#module-analysis_engine.scripts.fetch_new_stock_datasets>`__\n\n    .. 
note:: Yahoo `disabled the YQL finance API so fetching pricing data from yahoo is disabled by default \u003Chttps:\u002F\u002Fdeveloper.yahoo.com\u002Fyql\u002F>`__\n\n#.  View the Compressed Pricing Data in Redis\n\n    ::\n\n        redis-cli keys \"SPY_*\"\n        redis-cli get \"\u003Ckey like SPY_2019-01-08_minute>\"\n\nRun Backtests with the Algorithm Runner API\n===========================================\n\nRun a backtest with the latest pricing data:\n\n.. code-block:: python\n\n    import analysis_engine.algo_runner as algo_runner\n    import analysis_engine.plot_trading_history as plot\n    runner = algo_runner.AlgoRunner('SPY')\n    # run the algorithm with the latest 200 minutes:\n    df = runner.latest()\n    print(df[['minute', 'close']].tail(5))\n    plot.plot_trading_history(\n        title=(\n            f'SPY - ${df[\"close\"].iloc[-1]} at: '\n            f'{df[\"minute\"].iloc[-1]}'),\n        df=df)\n    # start a full backtest with:\n    # runner.start()\n\nCheck out the `backtest_with_runner.py script \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Fbacktest_with_runner.py>`__ for a command line example of using the `Algorithm Runner API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Falgo_runner.html>`__ to run and plot from an `Algorithm backtest config file \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcfg\u002Fdefault_algo.json>`__.\n\nExtract from Redis API\n======================\n\nOnce fetched, you can extract datasets from the redis cache with:\n\n.. code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract('SPY'))\n\nExtract Latest Minute Pricing for Stocks and Options\n====================================================\n\n.. 
code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract(\n        'SPY',\n        datasets=['minute', 'tdcalls', 'tdputs']))\n\nExtract Historical Data\n-----------------------\n\nExtract historical data with the ``date`` argument formatted ``YYYY-MM-DD``:\n\n.. code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract(\n        'AAPL',\n        datasets=['minute', 'daily', 'financials', 'earnings', 'dividends'],\n        date='2019-02-15'))\n\nAdditional Extraction APIs\n==========================\n\n- `Extraction API Reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fextract.html>`__\n- `IEX Cloud Extraction API Reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fiex_api.html#iex-extraction-api-reference>`__\n- `Tradier Extraction API Reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Ftradier.html#tradier-extraction-api-reference>`__\n- `Inspect Cached Datasets in Redis for Errors \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Finspect_datasets.html#module-analysis_engine.scripts.inspect_datasets>`__\n\nBackups\n=======\n\nPricing data is automatically compressed in redis and there is an `example Kubernetes job for backing up all stored pricing data to AWS S3 \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fbackups\u002Fbackup-to-aws-job.yml>`__.\n\nRunning the Full Stack Locally for Backtesting and Live Trading Analysis\n========================================================================\n\nWhile not required for backtesting, running the full stack is required for running algorithms during a live trading session. Here is a video on how to deploy the full stack locally using docker compose and the commands from the video.\n\n.. 
image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220487.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220487?autoplay=1\n    :alt: Running the Full Stack Locally for Backtesting and Live Trading Analysis\n\n#.  Start Workers, Backtester, Pricing Data Collection, Jupyter, Redis and Minio\n\n    Now start the rest of the stack with the command below. This will pull the `~3.0 GB stock-analysis-engine docker image \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fjayjohnson\u002Fstock-analysis-engine>`__ and start the workers, backtester, dataset collection and `Jupyter image \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fjayjohnson\u002Fstock-analysis-jupyter>`__. It will start `Redis \u003Chttps:\u002F\u002Fhub.docker.com\u002F_\u002Fredis>`__ and `Minio \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fminio\u002Fminio>`__ if they are not running already.\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh\n\n    .. tip:: Mac OS X users: note that `there is a known docker compose issue with network_mode: \"host\" \u003Chttps:\u002F\u002Fgithub.com\u002Fdocker\u002Ffor-mac\u002Fissues\u002F1031>`__, so you may have issues connecting to your services.\n\n#.  Check the Docker Containers\n\n    ::\n\n        docker ps -a\n\n#.  View the Dataset Collection Logs\n\n    ::\n\n        logs-dataset-collection.sh\n\n#.  Watch the Pricing Engine Logs (stop tailing with ``ctrl+c``)\n\n    ::\n\n        logs-workers.sh\n\n#.  Verify Pricing Data is in Redis\n\n    ::\n\n        redis-cli keys \"*\"\n\n#.  Optional - Automate `pricing data collection with the automation-dataset-collection.yml docker compose file \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fautomation-dataset-collection.yml>`__:\n\n    .. 
note:: Depending on how fast you want to run intraday algorithms, you can use this docker compose job or the `Kubernetes job \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fjob.yml>`__ or the `Fetch from Only Tradier Kubernetes job \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_tradier_per_minute.yml>`__ to collect the most recent pricing information\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh -c\n\nRun a Custom Minute-by-Minute Intraday Algorithm Backtest and Plot the Trading History\n======================================================================================\n\nWith pricing data in redis, you can start running backtests a few ways:\n\n- `Comparing 3 Deep Neural Networks Trained to Predict a Stocks Closing Price in a Jupyter Notebook \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FComparing-3-Deep-Neural-Networks-Trained-to-Predict-a-Stocks-Closing-Price-Using-The-Analysis-Engine.ipynb>`__\n- `Build, run and tune within a Jupyter Notebook and plot the balance vs the stock's closing price while running \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FRun-a-Custom-Trading-Algorithm-Backtest-with-Minute-Timeseries-Pricing-Data.ipynb>`__\n- `Analyze and replay algorithm trading histories stored in s3 with this Jupyter Notebook \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FAnalyze%20Compressed%20Algorithm%20Trading%20Histories%20Stored%20in%20S3.ipynb>`__\n- `Run with the command line backtest tool 
\u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py>`__\n- `Advanced - building a standalone algorithm as a class for running trading analysis \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__\n\nRunning an Algorithm with Live Intraday Pricing Data\n====================================================\n\nHere is a video showing how to run it:\n\n.. image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220498.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220498?autoplay=1\n    :alt: Running an Algorithm with Live Intraday Pricing Data\n\nThe `backtest command line tool \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py>`__ uses an `algorithm config dictionary \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Ftests\u002Falgo_configs\u002Ftest_5_days_ahead.json>`__ to build multiple `Williams %R indicators \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py#L49>`__ into an algorithm with a **10,000.00 USD** starting balance. Once configured, the backtest iterates through each trading dataset and evaluates if it should buy or sell based off the pricing data. After it finishes, the tool will display a chart showing the algorithm's **balance** and the stock's **close price** per minute using matplotlib and seaborn.\n\n::\n\n    # this can take a few minutes to evaluate\n    # as more data is collected\n    # because each day has 390 rows to process\n    bt -t SPY -f \u002Ftmp\u002Fhistory.json\n\n.. 
note:: The algorithm's **trading history** dataset provides many additional columns to review for tuning indicators and custom buy\u002Fsell rules. To reduce the time spent waiting on an algorithm to finish processing, you can save the entire trading history to disk with the ``-f \u003Csave_to_file>`` argument.\n\nView the Minute Algorithm's Trading History from a File\n=======================================================\n\nOnce the **trading history** is saved to disk, you can open it back up and plot other columns within the dataset with:\n\n.. image:: https:\u002F\u002Fi.imgur.com\u002FpH368gy.png\n\n::\n\n    # by default the plot shows\n    # balance vs close per minute\n    plot-history -f \u002Ftmp\u002Fhistory.json\n\nRun a Custom Algorithm and Save the Trading History with just Today's Pricing Data\n==================================================================================\n\nHere's how to run an algorithm during a live trading session. This approach assumes another process or cron is ``fetch-ing`` the pricing data using the engine so the algorithm(s) have access to the latest pricing data:\n\n::\n\n    bt -t SPY -f \u002Ftmp\u002FSPY-history-$(date +\"%Y-%m-%d\").json -j $(date +\"%Y-%m-%d\")\n\n.. note:: Using ``-j \u003CDATE>`` will make the algorithm **jump-to-this-date** before starting any trading. 
This is helpful for debugging indicators, algorithms, dataset issues, and buy\u002Fsell rules as well.\n\nRun a Backtest using an External Algorithm Module and Config File\n=================================================================\n\nRun an algorithm backtest using a config file on disk and a standalone algorithm class contained in a single Python module file (which can even live outside the repository):\n\n::\n\n    ticker=SPY\n    algo_config=\u003CCUSTOM_ALGO_CONFIG_DIR>\u002Fminute_algo.json\n    algo_mod=\u003CCUSTOM_ALGO_MODULE_DIR>\u002Fminute_algo.py\n    bt -t ${ticker} -c ${algo_config} -g ${algo_mod}\n\nOr the config can use ``\"algo_path\": \"\u003CPATH_TO_FILE>\"`` to set the path to an external algorithm module file.\n\n::\n\n    bt -t ${ticker} -c ${algo_config}\n\n.. note:: A standalone algorithm class must derive from the ``analysis_engine.algo.BaseAlgo`` class\n\nBuilding Your Own Trading Algorithms\n====================================\n\nBeyond running backtests, the included engine supports running many algorithms and fetching data for both live trading and backtesting at the same time. As you start to use this approach, you will generate lots of algorithm pricing datasets, history datasets and, coming soon, performance datasets for AI training. Because algorithms utilize the same dataset structure, you can share **ready-to-go** datasets with a team and publish them to S3 for kicking off backtests using lambda functions or just for archival and disaster recovery.\n\n.. 
note:: Backtests can use **ready-to-go** datasets out of S3, redis or a file\n\nThe next section looks at how to build `algorithm-ready datasets from cached pricing data in redis \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine#extract-algorithm-ready-datasets>`__.\n\nRun a Local Backtest and Publish Algorithm Trading History to S3\n================================================================\n\n::\n\n    ae -t SPY -p s3:\u002F\u002Falgohistory\u002Falgo_training_SPY.json\n\nRun it distributed across the engine workers with ``-w``:\n\n::\n\n    ae -w -t SPY -p s3:\u002F\u002Falgohistory\u002Falgo_training_SPY.json\n\nRun a Local Backtest using an Algorithm Config and Extract an Algorithm-Ready Dataset\n=====================================================================================\n\nUse this command to start a local backtest with the included `algorithm config \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Ftests\u002Falgo_configs\u002Ftest_5_days_ahead.json>`__. This backtest will also generate a local algorithm-ready dataset saved to a file once it finishes.\n\n#.  Define common values\n\n    ::\n\n        ticker=SPY\n        algo_config=tests\u002Falgo_configs\u002Ftest_5_days_ahead.json\n        extract_loc=file:\u002Ftmp\u002Falgoready-SPY-latest.json\n        history_loc=file:\u002Ftmp\u002Fhistory-SPY-latest.json\n        load_loc=${extract_loc}\n\nRun Algo with Extraction and History Publishing\n-----------------------------------------------\n\n::\n\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -e ${extract_loc} -p ${history_loc}\n\nProfile Your Algorithm's Code Performance with vprof\n====================================================\n\n.. 
image:: https:\u002F\u002Fi.imgur.com\u002F1cwDUBC.png\n\nThe pip package includes `vprof for profiling an algorithm's performance (cpu, memory, profiler and heat map - not money-related) \u003Chttps:\u002F\u002Fgithub.com\u002Fnvdv\u002Fvprof>`__, which was used to generate the CPU flame graph seen above.\n\nProfile your algorithm's code performance with the following steps:\n\n#.  Start vprof in remote mode in the first terminal\n\n    .. note:: This command will start a webapp on port ``3434``\n\n    ::\n\n        vprof -r -p 3434\n\n#.  Start the Profiler in a second terminal\n\n    .. note:: This command pushes data to the webapp in the other terminal listening on port ``3434``\n\n    ::\n\n        vprof -c cm .\u002Fanalysis_engine\u002Fperf\u002Fprofile_algo_runner.py\n\nRun a Local Backtest using an Algorithm Config and an Algorithm-Ready Dataset\n=============================================================================\n\nAfter generating the local algorithm-ready dataset (which can take some time), use this command to run another backtest using the file on disk:\n\n::\n\n    dev_history_loc=file:\u002Ftmp\u002Fdev-history-${ticker}-latest.json\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc}\n\nView Buy and Sell Transactions\n------------------------------\n\n::\n\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc} | grep \"TRADE\"\n\nPlot Trading History Tools\n==========================\n\nPlot Timeseries Trading History with High + Low + Open + Close\n--------------------------------------------------------------\n\n::\n\n    sa -t SPY -H ${dev_history_loc}\n\nRun and Publish Trading Performance Report for a Custom Algorithm\n=================================================================\n\nThis will run a backtest over the past 60 days in chronological order and run the `standalone algorithm as a class example 
\u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__. Once done, it will publish the trading performance report to a file or minio (s3).\n\nWrite the Trading Performance Report to a Local File\n----------------------------------------------------\n\n::\n\n    run-algo-report-to-file.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n    # run-algo-report-to-file.sh -t \u003CTICKER> -b \u003CNUM_DAYS_BACK> -a \u003CCUSTOM_ALGO_MODULE>\n    # run on specific date ranges with:\n    # -s \u003Cstart date YYYY-MM-DD> -n \u003Cend date YYYY-MM-DD>\n\nWrite the Trading Performance Report to Minio (s3)\n--------------------------------------------------\n\n::\n\n    run-algo-report-to-s3.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n\nRun and Publish Trading History for a Custom Algorithm\n======================================================\n\nThis will run a full backtest across the past 60 days in chronological order and run the `example algorithm \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__. Once done, it will publish the trading history to a file or minio (s3).\n\nWrite the Trading History to a Local File\n-----------------------------------------\n\n::\n\n    run-algo-history-to-file.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n\nWrite the Trading History to Minio (s3)\n---------------------------------------\n\n::\n\n    run-algo-history-to-s3.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n\nDeveloping on AWS\n=================\n\nIf you are comfortable with AWS S3 usage charges, then you can run just with a redis server to develop and tune algorithms. 
This works for teams and for archiving datasets for disaster recovery.\n\nEnvironment Variables\n---------------------\n\nExport these based on your AWS IAM credentials and S3 endpoint:\n\n::\n\n    export AWS_ACCESS_KEY_ID=\"ACCESS\"\n    export AWS_SECRET_ACCESS_KEY=\"SECRET\"\n    export S3_ADDRESS=s3.us-east-1.amazonaws.com\n\nExtract and Publish to AWS S3\n=============================\n\n::\n\n    .\u002Ftools\u002Fbackup-datasets-on-s3.sh -t TICKER -q YOUR_BUCKET -k ${S3_ADDRESS} -r localhost:6379\n\nPublish to Custom AWS S3 Bucket and Key\n=======================================\n\n::\n\n    extract_loc=s3:\u002F\u002FYOUR_BUCKET\u002FTICKER-latest.json\n    .\u002Ftools\u002Fbackup-datasets-on-s3.sh -t TICKER -e ${extract_loc} -r localhost:6379\n\nBacktest a Custom Algorithm with a Dataset on AWS S3\n====================================================\n\n::\n\n    backtest_loc=s3:\u002F\u002FYOUR_BUCKET\u002FTICKER-latest.json\n    custom_algo_module=\u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n    sa -t TICKER -a ${S3_ADDRESS} -r localhost:6379 -b ${backtest_loc} -g ${custom_algo_module}\n\nFetching New Tradier Pricing Every Minute with Kubernetes\n=========================================================\n\nIf you want to fetch and append new option pricing data from `Tradier \u003Chttps:\u002F\u002Fdeveloper.tradier.com\u002Fgetting_started>`__, you can use the included Kubernetes job with a cron to pull new data every minute:\n\n::\n\n    kubectl apply -f \u002Fopt\u002Fsa\u002Fk8\u002Fdatasets\u002Fpull_tradier_per_minute.yml\n\nRun a Distributed 60-day Backtest on SPY and Publish the Trading Report, Trading History and Algorithm-Ready Dataset to S3\n==========================================================================================================================\n\nPublish backtests and live trading algorithms to the engine's workers for running many algorithms at the same time. 
Once done, the algorithm will publish results to s3, redis or a local file. By default, the included example below publishes all datasets into minio (s3) where they can be downloaded for offline backtests or restored back into redis.\n\n.. note:: Running distributed algorithmic workloads requires redis, minio, and the engine running\n\n::\n\n    num_days_back=60\n    .\u002Ftools\u002Frun-algo-with-publishing.sh -t SPY -b ${num_days_back} -w\n\nRun a Local 60-day Backtest on SPY and Publish Trading Report, Trading History and Algorithm-Ready Dataset to S3\n================================================================================================================\n\n::\n\n    num_days_back=60\n    .\u002Ftools\u002Frun-algo-with-publishing.sh -t SPY -b ${num_days_back}\n\nOr manually with:\n\n::\n\n    ticker=SPY\n    num_days_back=60\n    use_date=$(date +\"%Y-%m-%d\")\n    ds_id=$(uuidgen | sed -e 's\u002F-\u002F\u002Fg')\n    ticker_dataset=\"${ticker}-${use_date}_${ds_id}.json\"\n    echo \"creating ${ticker} dataset: ${ticker_dataset}\"\n    extract_loc=\"s3:\u002F\u002Falgoready\u002F${ticker_dataset}\"\n    history_loc=\"s3:\u002F\u002Falgohistory\u002F${ticker_dataset}\"\n    report_loc=\"s3:\u002F\u002Falgoreport\u002F${ticker_dataset}\"\n    backtest_loc=\"s3:\u002F\u002Falgoready\u002F${ticker_dataset}\"  # same as the extract_loc\n    processed_loc=\"s3:\u002F\u002Falgoprocessed\u002F${ticker_dataset}\"  # archive it when done\n    start_date=$(date --date=\"${num_days_back} day ago\" +\"%Y-%m-%d\")\n    echo \"\"\n    echo \"extracting algorithm-ready dataset: ${extract_loc}\"\n    echo \"sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}\"\n    sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}\n    echo \"\"\n    echo \"running algo with: ${backtest_loc}\"\n    echo \"sa -t SPY -p ${history_loc} -o ${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}\"\n    sa -t SPY -p ${history_loc} -o 
${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}

Jupyter on Kubernetes
=====================

This command runs Jupyter on an `AntiNex Kubernetes cluster <https://deploy-to-kubernetes.readthedocs.io/en/latest/>`__

::

    ./k8/jupyter/run.sh ceph dev

Kubernetes - Analyze and Tune Algorithms from a Trading History
===============================================================

With the Analysis Engine's Jupyter instance deployed you can `tune algorithms from a trading history using this notebook <https://aejupyter.example.com/notebooks/Analyze%20Compressed%20Algorithm%20Trading%20Histories%20Stored%20in%20S3.ipynb>`__.

Kubernetes Job - Export SPY Datasets and Publish to Minio
=========================================================

Manually run with the ``ssheng`` helper function:

::

    function ssheng() {
        pod_name=$(kubectl get po | grep ae-engine | grep Running | tail -1 | awk '{print $1}')
        echo "logging into ${pod_name}"
        kubectl exec -it ${pod_name} bash
    }
    ssheng
    # once inside the container on kubernetes
    source /opt/venv/bin/activate
    sa -a minio-service:9000 -r redis-master:6379 -e s3://backups/SPY-$(date +"%Y-%m-%d") -t SPY

View Algorithm-Ready Datasets
-----------------------------

With the AWS cli configured you can view available algorithm-ready datasets in your minio (s3) bucket with the command:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algoready

View Trading History Datasets
-----------------------------

With the AWS cli configured you can view available trading history datasets in your minio (s3) bucket with the command:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algohistory

View Trading Report 
Datasets
-----------------------------

With the AWS cli configured you can view available trading performance report datasets in your minio (s3) bucket with the command:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algoreport

Advanced - Running Algorithm Backtests Offline
==============================================

With `extracted Algorithm-Ready datasets in minio (s3), redis or a file <https://github.com/AlgoTraders/stock-analysis-engine#extract-algorithm-ready-datasets>`__ you can develop and tune your own algorithms offline without having redis, minio, the analysis engine, or jupyter running locally.

Run an Offline Custom Algorithm Backtest with an Algorithm-Ready File
----------------------------------------------------------------------

::

    # extract with:
    sa -t SPY -e file:/tmp/SPY-latest.json
    sa -t SPY -b file:/tmp/SPY-latest.json -g /opt/sa/analysis_engine/mocks/example_algo_minute.py

Run the Intraday Minute-by-Minute Algorithm and Publish the Algorithm-Ready Dataset to S3
-----------------------------------------------------------------------------------------

To run the `included standalone algorithm <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/mocks/example_algo_minute.py>`__ with the latest pricing datasets, use:

::

    sa -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json

And to debug an algorithm's historical trading performance add the ``-d`` debug flag:

::

    sa -d -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json

Extract Algorithm-Ready Datasets
================================

With pricing data 
cached in redis, you can extract algorithm-ready datasets and save them to a local file for offline historical backtesting analysis. This also serves as a local backup where all cached data for a single ticker is in a single local file.

Extract an Algorithm-Ready Dataset from Redis and Save it to a File
-------------------------------------------------------------------

::

    sa -t SPY -e ~/SPY-latest.json

Create a Daily Backup
---------------------

::

    sa -t SPY -e ~/SPY-$(date +"%Y-%m-%d").json

Validate the Daily Backup by Examining the Dataset File
-------------------------------------------------------

::

    sa -t SPY -l ~/SPY-$(date +"%Y-%m-%d").json

Restore Backup to Redis
-----------------------

Use this command to cache missing pricing datasets so algorithms have the correct data ready-to-go before making buy and sell predictions.

.. note:: By default, this command will not overwrite existing datasets in redis. It was built as a tool for merging redis pricing datasets after a VM restarted and pricing data was missing from the past few days (gaps in pricing data are bad for algorithms).

::

    sa -t SPY -L ~/SPY-$(date +"%Y-%m-%d").json

Fetch
-----

With redis and minio running (``./compose/start.sh``), you can fetch, cache, archive and return all of the newest datasets for tickers:

.. code-block:: python

    from analysis_engine.fetch import fetch
    d = fetch(ticker='SPY')
    for k in d['SPY']:
        print(f'dataset key: {k}\nvalue {d["SPY"][k]}\n')

Backfill Historical Minute Data from IEX Cloud
==============================================

.. 
note:: `IEX Cloud supports pulling from 30 days before today \u003Chttps:\u002F\u002Fiexcloud.io\u002Fdocs\u002Fapi\u002F#historical-prices>`__\n\n::\n\n    fetch -t TICKER -F PAST_DATE -g iex_min\n    # example:\n    # fetch -t SPY -F 2019-02-07 -g iex_min\n\nPlease refer to the `Stock Analysis Intro Extracting Datasets Jupyter Notebook \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FStock-Analysis-Intro-Extracting-Datasets.ipynb>`__ for the latest usage examples.\n\n.. list-table::\n   :header-rows: 1\n\n   * - `Build \u003Chttps:\u002F\u002Ftravis-ci.org\u002FAlgoTraders\u002Fstock-analysis-engine>`__\n   * - .. image:: https:\u002F\u002Fapi.travis-ci.org\u002FAlgoTraders\u002Fstock-analysis-engine.svg\n           :alt: Travis Tests\n           :target: https:\u002F\u002Ftravis-ci.org\u002FAlgoTraders\u002Fstock-analysis-engine\n\nGetting Started\n===============\n\nThis section outlines how to get the Stock Analysis stack running locally with:\n\n- Redis\n- Minio (S3)\n- Stock Analysis engine\n- Jupyter\n\nFor background, the stack provides a data pipeline that automatically archives pricing data in `minio (s3) \u003Chttps:\u002F\u002Fminio.io>`__ and caches pricing data in redis. Once cached or archived, custom algorithms can use the pricing information to determine buy or sell conditions and track internal trading performance across historical backtests.\n\nFrom a technical perspective, the engine uses `Celery workers to process heavyweight, asynchronous tasks \u003Chttp:\u002F\u002Fwww.celeryproject.org\u002F>`__ and scales horizontally `with support for many transports and backends depending on where you need to run it \u003Chttps:\u002F\u002Fgithub.com\u002Fcelery\u002Fcelery#transports-and-backends>`__. 
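
The fetch-then-archive flow above can be sketched with stdlib primitives. This is only an illustration of the queue-and-worker pattern under the assumption of a single worker thread; the engine itself wires this up with Celery tasks over a broker:

```python
# Stdlib-only sketch of the asynchronous worker pattern described above:
# producers enqueue ticker jobs, a worker drains the queue and records
# each dataset as cached + archived. Not the engine's actual Celery code.
import queue
import threading

jobs = queue.Queue()
archived = []


def worker():
    while True:
        ticker = jobs.get()
        if ticker is None:  # shutdown sentinel
            jobs.task_done()
            break
        archived.append(f'{ticker}: cached in redis, archived to minio (s3)')
        jobs.task_done()


t = threading.Thread(target=worker)
t.start()
for ticker in ('SPY', 'AMZN', 'NFLX'):
    jobs.put(ticker)
jobs.put(None)
t.join()
print(archived)
```

With Celery, each ``jobs.put(...)`` becomes a task invocation and horizontal scaling comes from running more worker processes against the same broker.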
The stack deploys with `Kubernetes <https://github.com/AlgoTraders/stock-analysis-engine#running-on-kubernetes>`__ or docker compose and `supports publishing trading alerts to Slack <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro-Publishing-to-Slack.ipynb>`__.

With the stack already running, please refer to the `Intro Stock Analysis using Jupyter Notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro.ipynb>`__ for more getting started examples.

Setting up Your Tradier Account with Docker Compose
===================================================

Please set your Tradier account token in the docker environment files before starting the stack:

::

    grep -r SETYOURTRADIERTOKENHERE compose/*
    compose/envs/backtester.env:TD_TOKEN=SETYOURTRADIERTOKENHERE
    compose/envs/workers.env:TD_TOKEN=SETYOURTRADIERTOKENHERE

Please export the variable for developing locally:

::

    export TD_TOKEN=<TRADIER_ACCOUNT_TOKEN>

.. note:: Please restart the stack with ``./compose/stop.sh`` then ``./compose/start.sh`` after setting the Tradier token environment variable

#.  Start Redis and Minio

    .. note:: The Redis and Minio containers are set up to save data to ``/data`` so files can survive a restart/reboot. On Mac OS X, please make sure to add ``/data`` (and ``/data/sa/notebooks`` for Jupyter notebooks) on the Docker Preferences -> File Sharing tab and let the docker daemon restart before trying to start the containers. 
If not, you will likely see errors like:

       ::

            ERROR: for minio  Cannot start service minio:
            b'Mounts denied: \r\nThe path /data/minio/data\r\nis not shared from OS X

        Here is the command to manually create the shared volume directories:

        ::

            sudo mkdir -p -m 777 /data/redis/data /data/minio/data /data/sa/notebooks/dev /data/registry/auth /data/registry/data

    ::

        ./compose/start.sh

#.  Verify Redis and Minio are Running

    ::

        docker ps | grep -E "redis|minio"

Running on Ubuntu and CentOS
============================

#.  Install Packages

    Ubuntu

    ::

        sudo apt-get install make cmake gcc python3-distutils python3-tk python3 python3-apport python3-certifi python3-dev python3-pip python3-venv python3.6 redis-tools virtualenv libcurl4-openssl-dev libssl-dev

    CentOS 7

    ::

        sudo yum install cmake gcc gcc-c++ make tkinter curl-devel make cmake python-devel python-setuptools python-pip python-virtualenv redis python36u-libs python36u-devel python36u-pip python36u-tkinter python36u-setuptools python36u openssl-devel

#.  Install TA-Lib

    Follow the `TA-Lib install guide <https://mrjbq7.github.io/ta-lib/install.html>`__ or use the included install tool as root:

    ::

        sudo su
        /opt/sa/tools/linux-install-talib.sh
        exit

#.  Create and Load Python 3 Virtual Environment

    ::

        virtualenv -p python3 /opt/venv
        source /opt/venv/bin/activate
        pip install --upgrade pip setuptools

#.  Install Analysis Pip

    ::

        pip install -e .


#.  Verify Pip installed

    ::

        pip list | grep stock-analysis-engine

Running on Mac OS X
===================

#.  
Download Python 3.6

    .. note:: Python 3.7 is not supported by celery so please ensure it is python 3.6

    https://www.python.org/downloads/mac-osx/

#.  Install Packages

    ::

        brew install openssl pyenv-virtualenv redis freetype pkg-config gcc ta-lib

    .. note:: For Mac OS X users: the ``keras``, ``tensorflow`` and ``h5py`` installs have not been debugged yet. Please let us know if you have issues setting up your environment; we likely have not hit the issue yet.

#.  Create and Load Python 3 Virtual Environment

    ::

        python3 -m venv /opt/venv
        source /opt/venv/bin/activate
        pip install --upgrade pip setuptools

#.  Install Certs

    After hitting ssl verify errors, I found `this stack overflow <https://stackoverflow.com/questions/42098126/mac-osx-python-ssl-sslerror-ssl-certificate-verify-failed-certificate-verify>`__ which shows there's an additional step for setting up python 3.6:

    ::

        /Applications/Python\ 3.6/Install\ Certificates.command

#.  Install PyCurl with OpenSSL

    ::

        PYCURL_SSL_LIBRARY=openssl LDFLAGS="-L/usr/local/opt/openssl/lib" CPPFLAGS="-I/usr/local/opt/openssl/include" pip install --no-cache-dir pycurl

#.  Install Analysis Pip

    ::

        pip install --upgrade pip setuptools
        pip install -e .

#.  
Verify Pip installed

    ::

        pip list | grep stock-analysis-engine

Start Workers
=============

::

    ./start-workers.sh

Get and Publish Pricing Data
============================

Please refer to the latest API docs in the repo:

https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/api_requests.py

Fetch New Stock Datasets
========================

Run the ticker analysis using the `./analysis_engine/scripts/fetch_new_stock_datasets.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__:

Collect all datasets for a Ticker or Symbol
-------------------------------------------

Collect all datasets for the ticker **SPY**:

::

    fetch -t SPY

.. note:: This requires that the following services are listening on:

    - redis ``localhost:6379``
    - minio ``localhost:9000``

View the Engine Worker Logs
---------------------------

::

    docker logs ae-workers

Running Inside Docker Containers
--------------------------------

If you are using an engine that is running inside a docker container, then ``localhost`` is probably not the correct network hostname for finding ``redis`` and ``minio``.

Please set these values as needed to publish and archive the dataset artifacts if you are using the `integration <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/integration.yml>`__ or `notebook integration <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/notebook-integration.yml>`__ docker compose files for deploying the analysis engine stack:

::

    fetch -t SPY -a 0.0.0.0:9000 -r 0.0.0.0:6379

.. 
warning:: Sharing the same Redis server with multiple engine workers running both inside and outside docker containers is not recommended. This is because the ``REDIS_ADDRESS`` and ``S3_ADDRESS`` can only be one string value at the moment. So if a job is picked up by the wrong engine (which cannot connect to the correct Redis and Minio), it can lead to data not being cached or archived correctly and show up as connectivity failures.

Detailed Usage Example
----------------------

The `fetch_new_stock_datasets.py script <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__ supports many parameters. Here is how to set it up if you have custom ``redis`` and ``minio`` deployments like on kubernetes as `minio-service:9000 <https://github.com/AlgoTraders/stock-analysis-engine/blob/7323ad4007b44eaa511d448c8eb500cec9fe3848/k8/engine/deployment.yml#L80-L81>`__ and `redis-master:6379 <https://github.com/AlgoTraders/stock-analysis-engine/blob/7323ad4007b44eaa511d448c8eb500cec9fe3848/k8/engine/deployment.yml#L88-L89>`__:

- S3 authentication (``-k`` and ``-s``)
- S3 endpoint (``-a``)
- Redis endpoint (``-r``)
- Custom S3 Key and Redis Key Name (``-n``)

::

    fetch -t SPY -g all -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n SPY_demo -P 1 -N 1 -O 1 -U 1 -R 1

Usage
-----

Please refer to the `fetch_new_stock_datasets.py script <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__ for the latest supported usage if some of these are out of date:

::

    fetch -h
    2019-02-11 01:55:33,791 - fetch - INFO - start - fetch_new_stock_datasets
    usage: fetch_new_stock_datasets.py 
[-h] [-t TICKER] [-g FETCH_MODE]\n                                    [-i TICKER_ID] [-e EXP_DATE_STR]\n                                    [-l LOG_CONFIG_PATH] [-b BROKER_URL]\n                                    [-B BACKEND_URL] [-k S3_ACCESS_KEY]\n                                    [-s S3_SECRET_KEY] [-a S3_ADDRESS]\n                                    [-S S3_SECURE] [-u S3_BUCKET_NAME]\n                                    [-G S3_REGION_NAME] [-p REDIS_PASSWORD]\n                                    [-r REDIS_ADDRESS] [-n KEYNAME]\n                                    [-m REDIS_DB] [-x REDIS_EXPIRE] [-z STRIKE]\n                                    [-c CONTRACT_TYPE] [-P GET_PRICING]\n                                    [-N GET_NEWS] [-O GET_OPTIONS]\n                                    [-U S3_ENABLED] [-R REDIS_ENABLED]\n                                    [-A ANALYSIS_TYPE] [-L URLS] [-Z] [-d]\n\n    Download and store the latest stock pricing, news, and options chain data and\n    store it in Minio (S3) and Redis. 
Also includes support for getting FinViz
    screener tickers

    optional arguments:
    -h, --help          show this help message and exit
    -t TICKER           ticker
    -g FETCH_MODE       optional - fetch mode: initial = default fetch from
                        initial data feeds (IEX and Tradier), intra = fetch
                        intraday from IEX and Tradier, daily = fetch daily from
                        IEX, weekly = fetch weekly from IEX, all = fetch from
                        all data feeds, td = fetch from Tradier feeds only, iex
                        = fetch from IEX Cloud feeds only, iex_min = fetch IEX
                        Cloud intraday per-minute feed
                        https://iexcloud.io/docs/api/#historical-prices iex_day
                        = fetch IEX Cloud daily feed
                        https://iexcloud.io/docs/api/#historical-prices
                        iex_quote = fetch IEX Cloud quotes feed
                        https://iexcloud.io/docs/api/#quote iex_stats = fetch
                        IEX Cloud key stats feed
                        https://iexcloud.io/docs/api/#key-stats iex_peers =
                        fetch from just IEX Cloud peers feed
                        https://iexcloud.io/docs/api/#peers iex_news = fetch IEX
                        Cloud news feed https://iexcloud.io/docs/api/#news
                        iex_fin = fetch IEX Cloud financials
                        feed https://iexcloud.io/docs/api/#financials iex_earn =
                        fetch from just IEX Cloud earnings feed
                        https://iexcloud.io/docs/api/#earnings iex_div = fetch
                        from just IEX Cloud dividends
                        
feed https://iexcloud.io/docs/api/#dividends iex_comp =
                        fetch from just IEX Cloud company feed
                        https://iexcloud.io/docs/api/#company
    -i TICKER_ID        optional - ticker id not used without a database
    -e EXP_DATE_STR     optional - options expiration date
    -l LOG_CONFIG_PATH  optional - path to the log config file
    -b BROKER_URL       optional - broker url for Celery
    -B BACKEND_URL      optional - backend url for Celery
    -k S3_ACCESS_KEY    optional - s3 access key
    -s S3_SECRET_KEY    optional - s3 secret key
    -a S3_ADDRESS       optional - s3 address format: <host:port>
    -S S3_SECURE        optional - s3 ssl or not
    -u S3_BUCKET_NAME   optional - s3 bucket name
    -G S3_REGION_NAME   optional - s3 region name
    -p REDIS_PASSWORD   optional - redis_password
    -r REDIS_ADDRESS    optional - redis_address format: <host:port>
    -n KEYNAME          optional - redis and s3 key name
    -m REDIS_DB         optional - redis database number (0 by default)
    -x REDIS_EXPIRE     optional - redis expiration in seconds
    -z STRIKE           optional - strike price
    -c CONTRACT_TYPE    optional - contract type "C" for calls "P" for puts
    -P GET_PRICING      optional - get pricing data if "1" or "0" disabled
    -N GET_NEWS         optional - get news data if "1" or "0" disabled
    -O GET_OPTIONS      optional - get options data if "1" or "0" disabled
    -U S3_ENABLED       optional - s3 enabled for publishing if "1" or "0" is
                        disabled
    -R REDIS_ENABLED    optional - redis enabled for publishing if "1" or "0" is
                        disabled
    -A ANALYSIS_TYPE    optional - run an analysis supported modes: scn
    -L URLS             optional - screener urls to pull tickers for analysis
    -Z                  disable run without an 
engine for local testing and
                        demos
    -d                  debug

Run FinViz Screener-driven Analysis
===================================

This is a work in progress, but the screener-driven workflow is:

#.  Convert FinViz screeners into a list of tickers
    and a ``pandas.DataFrame`` from each ticker's html row
#.  Build unique list of tickers
#.  Pull datasets for each ticker
#.  Run sell-side processing - coming soon
#.  Run buy-side processing - coming soon
#.  Issue alerts to slack - coming soon

Here is how to run an analysis on all unique tickers found in two FinViz screener urls:

https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4
and
https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4

::

    fetch -A scn -L 'https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'

Run Publish from an Existing S3 Key to Redis
============================================

#.  Upload Integration Test Key to S3

    ::

        export INT_TESTS=1
        python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload

#.  Confirm the Integration Test Key is in S3

    http://localhost:9000/minio/integration-tests/

#.  Run an analysis with an existing S3 key using `./analysis_engine/scripts/publish_from_s3_to_redis.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/publish_from_s3_to_redis.py>`__

    ::

        publish_from_s3_to_redis.py -t SPY -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1

#.  
Confirm the Key is now in Redis\n\n    ::\n\n        .\u002Ftools\u002Fredis-cli.sh\n        127.0.0.1:6379> keys *\n        keys *\n        1) \"SPY_demo_daily\"\n        2) \"SPY_demo_minute\"\n        3) \"SPY_demo_company\"\n        4) \"integration-test-v1\"\n        5) \"SPY_demo_stats\"\n        6) \"SPY_demo\"\n        7) \"SPY_demo_quote\"\n        8) \"SPY_demo_peers\"\n        9) \"SPY_demo_dividends\"\n        10) \"SPY_demo_news1\"\n        11) \"SPY_demo_news\"\n        12) \"SPY_demo_options\"\n        13) \"SPY_demo_pricing\"\n        127.0.0.1:6379>\n\nRun Aggregate and then Publish data for a Ticker from S3 to Redis\n=================================================================\n\n#.  Run an analysis with an existing S3 key using `.\u002Fanalysis_engine\u002Fscripts\u002Fpublish_ticker_aggregate_from_s3.py \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Fpublish_ticker_aggregate_from_s3.py>`__\n\n    ::\n\n        publish_ticker_aggregate_from_s3.py -t SPY -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -u pricing -c compileddatasets\n\n#.  Confirm the aggregated Ticker is now in Redis\n\n    ::\n\n        .\u002Ftools\u002Fredis-cli.sh\n        127.0.0.1:6379> keys *latest*\n        1) \"SPY_latest\"\n        127.0.0.1:6379>\n\nView Archives in S3 - Minio\n===========================\n\nHere's a screenshot showing the stock market dataset archives created while running on the `3-node Kubernetes cluster for distributed AI predictions \u003Chttps:\u002F\u002Fgithub.com\u002Fjay-johnson\u002Fdeploy-to-kubernetes#deploying-a-distributed-ai-stack-to-kubernetes-on-centos>`__\n\n.. 
image:: https:\u002F\u002Fi.imgur.com\u002FwDyPKAp.png\n\nhttp:\u002F\u002Flocalhost:9000\u002Fminio\u002Fpricing\u002F\n\nLogin\n\n- username: ``trexaccesskey``\n- password: ``trex123321``\n\nUsing the AWS CLI to List the Pricing Bucket\n\nPlease refer to the official steps for using the ``awscli`` pip with minio:\n\nhttps:\u002F\u002Fdocs.minio.io\u002Fdocs\u002Faws-cli-with-minio.html\n\n#.  Export Credentials\n\n    ::\n\n        export AWS_SECRET_ACCESS_KEY=trex123321\n        export AWS_ACCESS_KEY_ID=trexaccesskey\n\n#.  List Buckets\n\n    ::\n\n        aws --endpoint-url http:\u002F\u002Flocalhost:9000 s3 ls\n        2018-10-02 22:24:06 company\n        2018-10-02 22:24:02 daily\n        2018-10-02 22:24:06 dividends\n        2018-10-02 22:33:15 integration-tests\n        2018-10-02 22:24:03 minute\n        2018-10-02 22:24:05 news\n        2018-10-02 22:24:04 peers\n        2018-10-02 22:24:06 pricing\n        2018-10-02 22:24:04 stats\n        2018-10-02 22:24:04 quote\n\n#.  List Pricing Bucket Contents\n\n    ::\n\n        aws --endpoint-url http:\u002F\u002Flocalhost:9000 s3 ls s3:\u002F\u002Fpricing\n\n#.  Get the Latest SPY Pricing Key\n\n    ::\n\n        aws --endpoint-url http:\u002F\u002Flocalhost:9000 s3 ls s3:\u002F\u002Fpricing | grep -i spy_demo\n        SPY_demo\n\nView Caches in Redis\n====================\n\n::\n\n    .\u002Ftools\u002Fredis-cli.sh\n    127.0.0.1:6379> keys *\n    1) \"SPY_demo\"\n\nJupyter\n=======\n\nYou can run the Jupyter notebooks by starting the `notebook-integration.yml stack \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fnotebook-integration.yml>`__ with the command:\n\n.. warning:: On Mac OS X, Jupyter does not work with the Analysis Engine at the moment. 
PRs are welcome, but we have not figured out how to share the notebooks and access redis and minio due to the `known docker compose issue with network_host on Mac OS X <https://github.com/docker/for-mac/issues/1031>`__.

For Linux users, the Jupyter container hosts the `Stock Analysis Intro notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro.ipynb>`__ at the url (default login password is ``admin``):

http://localhost:8888/notebooks/Stock-Analysis-Intro.ipynb

Jupyter Presentations with RISE
===============================

The docker container comes with `RISE installed <https://github.com/damianavila/RISE>`__ for running notebook presentations from a browser. Here's the button on the notebook for starting the web presentation:

.. image:: https://i.imgur.com/IDMW2Oc.png

Distributed Automation with Docker
==================================

.. 
note:: Automation requires the integration stack running (redis + minio + engine) and docker-compose.\n\nDataset Collection\n==================\n\nStart automated dataset collection with docker compose:\n\n::\n\n    .\u002Fcompose\u002Fstart.sh -c\n\nDatasets in Redis\n=================\n\nAfter running the dataset collection container, the datasets should be auto-cached in Minio (http:\u002F\u002Flocalhost:9000\u002Fminio\u002Fpricing\u002F) and Redis:\n\n::\n\n    .\u002Ftools\u002Fredis-cli.sh\n    127.0.0.1:6379> keys *\n\nPublishing to Slack\n===================\n\nPlease refer to the `Publish Stock Alerts to Slack Jupyter Notebook \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FStock-Analysis-Intro-Publishing-to-Slack.ipynb>`__ for the latest usage examples.\n\nPublish FinViz Screener Tickers to Slack\n----------------------------------------\n\nHere is sample code for trying out the Slack integration.\n\n.. 
code-block:: python

    import analysis_engine.finviz.fetch_api as fv
    # assumption: SLACK_FINVIZ_COLUMNS is exported from the engine's
    # constants module
    from analysis_engine.consts import SLACK_FINVIZ_COLUMNS
    from analysis_engine.send_to_slack import post_df
    # simple NYSE Dow Jones Index Financials with a P/E above 5 screener url
    url = 'https://finviz.com/screener.ashx?v=111&f=exch_nyse,fa_pe_o5,idx_dji,sec_financial&ft=4'
    res = fv.fetch_tickers_from_screener(url=url)
    df = res['rec']['data']

    # please make sure the SLACK_WEBHOOK environment variable is set correctly:
    post_df(
        df=df[SLACK_FINVIZ_COLUMNS],
        columns=SLACK_FINVIZ_COLUMNS)

Running on Kubernetes
=====================

Kubernetes Deployments - Engine
-------------------------------

Deploy the engine with:

::

    kubectl apply -f ./k8/engine/deployment.yml

Kubernetes Job - Dataset Collection
-----------------------------------

Start the dataset collection job with:

::

    kubectl apply -f ./k8/datasets/job.yml

Kubernetes Deployments - Jupyter
--------------------------------

Deploy Jupyter to a Kubernetes cluster with:

::

    ./k8/jupyter/run.sh

Kubernetes with a Private Docker Registry
=========================================

You can deploy a private docker registry that can be used to pull images from outside a kubernetes cluster with the following steps:

#.  Deploy Docker Registry

    ::

        ./compose/start.sh -r

#.  Configure Kubernetes hosts and other docker daemons for insecure registries

    ::

        cat /etc/docker/daemon.json
        {
            "insecure-registries": [
                "<public ip address/fqdn for host running the registry container>:5000"
            ]
        }

#.  Restart all Docker daemons

    ::

        sudo systemctl restart docker

#.  Login to Docker Registry from all Kubernetes hosts and other daemons that need access to the registry

    .. 
note:: Change the default registry password by either changing the ``.\u002Fcompose\u002Fstart.sh`` file that uses ``trex`` and ``123321`` as the credentials or you can edit the volume mounted file ``\u002Fdata\u002Fregistry\u002Fauth\u002Fhtpasswd``. Here is how to find the registry's default login set up:\n\n        ::\n\n            grep docker compose\u002Fstart.sh  | grep htpass\n\n    ::\n\n        docker login \u003Cpublic ip address\u002Ffqdn for host running the registry container>:5000\n\n#.  Setup Kubernetes Secrets for All Credentials\n\n    Set each of the fields according to your own buckets, docker registry and Tradier account token:\n\n    ::\n\n        cat \u002Fopt\u002Fsa\u002Fk8\u002Fsecrets\u002Fsecrets.yml | grep SETYOUR\n        aws_access_key_id: SETYOURENCODEDAWSACCESSKEYID\n        aws_secret_access_key: SETYOURENCODEDAWSSECRETACCESSKEY\n        .dockerconfigjson: SETYOURDOCKERCREDS\n        td_token: SETYOURTDTOKEN\n\n#.  Deploy Kubernetes Secrets\n\n    ::\n\n        kubectl apply -f \u002Fopt\u002Fsa\u002Fk8\u002Fsecrets\u002Fsecrets.yml\n\n#.  Confirm Kubernetes Secrets are Deployed\n\n    ::\n\n        kubectl get secrets ae.docker.creds\n        NAME              TYPE                             DATA   AGE\n        ae.docker.creds   kubernetes.io\u002Fdockerconfigjson   1      4d1h\n\n    ::\n\n        kubectl get secrets | grep \"ae\\.\"\n        ae.docker.creds         kubernetes.io\u002Fdockerconfigjson        1      4d1h\n        ae.k8.aws.s3            Opaque                                3      4d1h\n        ae.k8.minio.s3          Opaque                                3      4d1h\n        ae.k8.tradier           Opaque                                4      4d1h\n\n#.  
Configure Kubernetes Deployments for using an External Private Docker Registry\n\n    Add these lines to a Kubernetes deployment yaml file based on your setup:\n\n    ::\n\n        imagePullSecrets:\n        - name: ae.docker.creds\n        containers:\n        - image: \u003Cpublic ip address\u002Ffqdn for host running the registry container>:5000\u002Fmy-own-stock-ae:latest\n          imagePullPolicy: Always\n\n.. tip:: After a sad amount of debugging time: when pulling docker images from an external registry, make sure to delete the existing pod before applying a new one. After running ``kubectl delete pod \u003Cname>``, you can apply\u002Fcreate the pod to get the latest image running.\n\nTesting\n=======\n\nTo show debug and trace logging, export ``SHARED_LOG_CFG`` pointing to a debug logger json file. To turn on debugging for this library, you can export this variable set to the repo's included file with the command:\n\n::\n\n    export SHARED_LOG_CFG=\u002Fopt\u002Fsa\u002Fanalysis_engine\u002Flog\u002Fdebug-logging.json\n\n.. note:: There is a known `pandas issue that logs a warning about _timelex \u003Chttps:\u002F\u002Fgithub.com\u002Fpandas-dev\u002Fpandas\u002Fissues\u002F18141>`__, and it will show as a warning until it is fixed in pandas. 
Please ignore this warning for now.\n\n   ::\n\n        DeprecationWarning: _timelex is a private class and may break without warning, it will be moved and or renamed in future versions.\n\nRun all tests with:\n\n::\n\n    py.test --maxfail=1\n\nRun a single test case with:\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_publish_pricing_data\n\nTest Publishing\n---------------\n\nS3 Upload\n---------\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_s3_upload\n\nPublish from S3 to Redis\n------------------------\n\n::\n\n    python -m unittest tests.test_publish_from_s3_to_redis.TestPublishFromS3ToRedis.test_success_publish_from_s3_to_redis\n\nRedis Cache Set\n---------------\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_redis_set\n\nPrepare Dataset\n---------------\n\n::\n\n    python -m unittest tests.test_prepare_pricing_dataset.TestPreparePricingDataset.test_prepare_pricing_data_success\n\nTest Algo Saving All Input Datasets to File\n-------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_algo_can_save_all_input_datasets_to_file\n\nEnd-to-End Integration Testing\n==============================\n\nStart all the containers for full end-to-end integration testing with the script:\n\n::\n\n    .\u002Fcompose\u002Fstart.sh -a\n\nVerify the containers are running:\n\n::\n\n    docker ps | grep -E \"stock-analysis|redis|minio\"\n\nStop the end-to-end stack:\n\n::\n\n    .\u002Fcompose\u002Fstop.sh\n    .\u002Fcompose\u002Fstop.sh -s\n\nIntegration UnitTests\n=====================\n\n.. 
note:: please start redis and minio before running these tests.\n\nPlease enable integration tests\n\n::\n\n    export INT_TESTS=1\n\nRedis\n-----\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_redis_set\n\nS3 Upload\n---------\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload\n\n\nPublish from S3 to Redis\n------------------------\n\n::\n\n    python -m unittest tests.test_publish_from_s3_to_redis.TestPublishFromS3ToRedis.test_integration_publish_from_s3_to_redis\n\nIEX Test - Fetching All Datasets\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data\n\nIEX Test - Fetch Daily\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_daily\n\nIEX Test - Fetch Minute\n-----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_minute\n\nIEX Test - Fetch Stats\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_stats\n\nIEX Test - Fetch Peers\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_peers\n\nIEX Test - Fetch News\n---------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_news\n\nIEX Test - Fetch Financials\n---------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_financials\n\nIEX Test - Fetch Earnings\n-------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_earnings\n\nIEX Test - Fetch Dividends\n--------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_dividends\n\nIEX 
Test - Fetch Company\n------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_company\n\nIEX Test - Fetch Financials Helper\n----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_get_financials_helper\n\nIEX Test - Extract Daily Dataset\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_daily_dataset\n\nIEX Test - Extract Minute Dataset\n---------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_minute_dataset\n\nIEX Test - Extract Quote Dataset\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_quote_dataset\n\nIEX Test - Extract Stats Dataset\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_stats_dataset\n\nIEX Test - Extract Peers Dataset\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_peers_dataset\n\nIEX Test - Extract News Dataset\n-------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_news_dataset\n\nIEX Test - Extract Financials Dataset\n-------------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_financials_dataset\n\nIEX Test - Extract Earnings Dataset\n-----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_earnings_dataset\n\nIEX Test - Extract Dividends 
Dataset\n------------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_dividends_dataset\n\nIEX Test - Extract Company Dataset\n----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_company_dataset\n\nFinViz Test - Fetch Tickers from Screener URL\n---------------------------------------------\n\n::\n\n    python -m unittest tests.test_finviz_fetch_api.TestFinVizFetchAPI.test_integration_test_fetch_tickers_from_screener\n\nor with code:\n\n.. code-block:: python\n\n    import analysis_engine.finviz.fetch_api as fv\n    url = 'https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=exch_nyse&ft=4&r=41'\n    res = fv.fetch_tickers_from_screener(url=url)\n    print(res)\n\nAlgorithm Testing\n=================\n\nAlgorithm Test - Input Dataset Publishing to Redis\n--------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_dataset_to_redis\n\nAlgorithm Test - Input Dataset Publishing to File\n-------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_dataset_to_file\n\nAlgorithm Test - Load Dataset From a File\n-----------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_load_from_file\n\nAlgorithm Test - Publish Algorithm-Ready Dataset to S3 and Load from S3\n-----------------------------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_s3_and_load\n\nAlgorithm Test - Publish Algorithm-Ready Dataset to Redis and Load from Redis\n------------------------------------------------------------------------------\n\n::\n\n    python -m unittest 
tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_redis_and_load\n\nAlgorithm Test - Extract Algorithm-Ready Dataset from Redis DB 0 and Load into Redis DB 1\n-----------------------------------------------------------------------------------------\n\nCopying datasets between redis databases is part of the integration tests. Run it with:\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_restore_ready_back_to_redis\n\nAlgorithm Test - Test the Docs Example\n--------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_sample_algo_code_in_docstring\n\nPrepare a Dataset\n=================\n\n::\n\n    ticker=SPY\n    sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo\n\nDebugging\n=========\n\nTest Algos\n----------\n\nThe fastest way to run algos is to specify a 1-day range:\n\n::\n\n    sa -t SPY -s $(date +\"%Y-%m-%d\") -n $(date +\"%Y-%m-%d\")\n\nTest Tasks\n----------\n\nMost of the scripts support running without Celery workers. 
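As a rough sketch of what this worker bypass pattern usually looks like (the names `run_task` and `fetch_pricing` below are hypothetical illustrations, not the engine's actual API), a `CELERY_DISABLED` style flag simply decides between calling the task function inline and handing it to a worker queue:

```python
import os


def fetch_pricing(ticker):
    """Hypothetical stand-in for one of the engine's task functions."""
    return f"fetched {ticker}"


def run_task(task_fn, *args, **kwargs):
    # When CELERY_DISABLED=1, call the task inline (synchronous mode)
    # instead of queueing it for a Celery worker. The worker path is
    # stubbed out here because this sketch has no broker running.
    if os.getenv("CELERY_DISABLED", "0") == "1":
        return task_fn(*args, **kwargs)
    raise RuntimeError(
        "no workers in this sketch - export CELERY_DISABLED=1 "
        "or start the Celery workers")


os.environ["CELERY_DISABLED"] = "1"
print(run_task(fetch_pricing, "SPY"))  # fetched SPY
```

The same environment variable check is why exporting `CELERY_DISABLED=1` before running the scripts lets them complete without any broker or worker processes.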
To run without workers in a synchronous mode use the command:\n\n::\n\n    export CELERY_DISABLED=1\n\n::\n\n    ticker=SPY\n    publish_from_s3_to_redis.py -t ${ticker} -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1\n    sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo\n    fetch -t ${ticker} -g all -e 2018-10-19 -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo -P 1 -N 1 -O 1 -U 1 -R 1\n    fetch -A scn -L 'https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'\n\nLinting and Other Tools\n-----------------------\n\n#.  Linting\n\n    ::\n\n        flake8 .\n        pycodestyle .\n\n#.  Sphinx Docs\n\n    ::\n\n        cd docs\n        make html\n\n#.  Docker Admin - Pull Latest\n\n    ::\n\n        docker pull jayjohnson\u002Fstock-analysis-jupyter && docker pull jayjohnson\u002Fstock-analysis-engine\n\n#.  Back up Docker Redis Database\n\n    ::\n\n        \u002Fopt\u002Fsa\u002Ftools\u002Fbackup-redis.sh\n\n    View local redis backups with:\n\n    ::\n\n        ls -hlrt \u002Fopt\u002Fsa\u002Ftests\u002Fdatasets\u002Fredis\u002Fredis-0-backup-*.rdb\n\n#.  Export the Kubernetes Redis Cluster's Database to the Local Redis Container\n\n    #.  stop the redis docker container:\n\n        ::\n\n            .\u002Fcompose\u002Fstop.sh\n\n    #.  Archive the previous redis database\n\n        ::\n\n            cp \u002Fdata\u002Fredis\u002Fdata\u002Fdump.rdb \u002Fdata\u002Fredis\u002Fdata\u002Farchive.rdb\n\n    #.  Save the Redis database in the Cluster\n\n        ::\n\n            kubectl exec -it redis-master-0 redis-cli save\n\n    #.  
Export the saved redis database file inside the pod to the default docker redis container's local file\n\n        ::\n\n            kubectl cp redis-master-0:\u002Fbitnami\u002Fredis\u002Fdata\u002Fdump.rdb \u002Fdata\u002Fredis\u002Fdata\u002Fdump.rdb\n\n    #.  Restart the stack\n\n        .. note:: Redis takes a few seconds to load all the data into memory, so the restart can take a moment\n\n        ::\n\n            .\u002Fcompose\u002Fstart.sh\n\nDeploy Fork Feature Branch to Running Containers\n================================================\n\nWhen developing features that impact multiple containers, you can deploy your own feature branch without re-downloading or manually building docker images. With the containers running, you can deploy your own fork's branch as a new image (the builds are automatically saved as new docker container images).\n\nDeploy a public or private fork into running containers\n-------------------------------------------------------\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh \u003Cgit fork https uri> \u003Coptional - branch name (master by default)> \u003Coptional - fork repo name>\n\nExample:\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh https:\u002F\u002Fgithub.com\u002Fjay-johnson\u002Fstock-analysis-engine.git timeseries-charts jay\n\nRestore the containers back to the Master\n-----------------------------------------\n\nRestore the container builds back to the ``master`` branch from https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine with:\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine.git master upstream\n\nDeploy Fork Alias\n-----------------\n\nHere's a bashrc alias for quickly building containers from a fork's feature branch:\n\n::\n\n    alias bd='pushd \u002Fopt\u002Fsa >> \u002Fdev\u002Fnull && source \u002Fopt\u002Fvenv\u002Fbin\u002Factivate && \u002Fopt\u002Fsa\u002Ftools\u002Fupdate-stack.sh 
https:\u002F\u002Fgithub.com\u002Fjay-johnson\u002Fstock-analysis-engine.git timeseries-charts jay && popd >> \u002Fdev\u002Fnull'\n\nDebug Fetching IEX Data\n-----------------------\n\n::\n\n    ticker=\"SPY\"\n    use_date=$(date +\"%Y-%m-%d\")\n    source \u002Fopt\u002Fvenv\u002Fbin\u002Factivate\n    exp_date=$(\u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fscripts\u002Fprint_next_expiration_date.py)\n    fetch -t ${ticker} -g iex -n ${ticker}_${use_date} -e ${exp_date} -Z\n\nFailed Fetching Tradier Data\n----------------------------\n\nPlease export a valid ``TD_TOKEN`` in your ``compose\u002Fenvs\u002F*.env`` docker compose files if you see the following errors trying to pull pricing data from Tradier:\n\n::\n\n    2019-01-09 00:16:47,148 - analysis_engine.td.fetch_api - INFO - failed to get put with response=\u003CResponse [401]> code=401 text=Invalid Access Token\n    2019-01-09 00:16:47,151 - analysis_engine.td.get_data - CRITICAL - ticker=TSLA-tdputs - ticker=TSLA field=10001 failed fetch_data with ex='date'\n    2019-01-09 00:16:47,151 - analysis_engine.work_tasks.get_new_pricing_data - CRITICAL - ticker=TSLA failed TD ticker=TSLA field=tdputs status=ERR err=ticker=TSLA-tdputs - ticker=TSLA field=10001 failed fetch_data with ex='date'\n\nLicense\n=======\n\nApache 2.0 - Please refer to the LICENSE_ for more details\n\n.. _License: https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002FLICENSE\n\nFAQ\n===\n\nCan I live trade with my algorithms?\n------------------------------------\n\nNot yet. Please reach out for help on how to do this or if you have a platform you like.\n\nCan I publish algorithm trade notifications?\n--------------------------------------------\n\nRight now algorithms only support publishing to a private Slack channel for sharing with a group when an algorithm finds a buy\u002Fsell trade to execute. 
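As a minimal sketch of what publishing a trade notification to a private Slack channel involves (the helpers `format_trade_notification` and `post_to_slack` are hypothetical illustrations, not the engine's `send_to_slack` module), the message is formatted and POSTed as JSON to the URL in the `SLACK_WEBHOOK` environment variable:

```python
import json
import os
import urllib.request


def format_trade_notification(ticker, side, shares, price):
    # Hypothetical formatter - the engine's real payloads differ.
    return f"{side.upper()} {shares} {ticker} @ ${price:.2f}"


def post_to_slack(text):
    # SLACK_WEBHOOK must hold a Slack incoming-webhook URL.
    payload = json.dumps({"text": text}).encode("utf-8")
    req = urllib.request.Request(
        os.environ["SLACK_WEBHOOK"],
        data=payload,
        headers={"Content-Type": "application/json"})
    urllib.request.urlopen(req)


msg = format_trade_notification("SPY", "buy", 10, 280.5)
print(msg)  # BUY 10 SPY @ $280.50
# post_to_slack(msg)  # uncomment once SLACK_WEBHOOK is exported
```

Slack's incoming webhooks accept any `{"text": ...}` JSON body, so the same approach works for buy/sell alerts from a cron job or a worker.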
Reach out if you have a custom chat client app or service you think should be supported.\n\nTerms of Service\n================\n\nData Attribution\n================\n\nThis repository currently uses `Tradier \u003Chttps:\u002F\u002Ftradier.com\u002F>`__ and `IEX \u003Chttps:\u002F\u002Fiextrading.com\u002Fdeveloper\u002Fdocs\u002F>`__ for pricing data. Usage of these feeds requires the following agreements in the terms of service.\n\nIEX Cloud\n=========\n\n- Link to `IEX's Terms of Use \u003Chttps:\u002F\u002Fiextrading.com\u002Fapi-exhibit-a>`__\n- `IEX Real-Time Price \u003Chttps:\u002F\u002Fiextrading.com\u002Fdeveloper>`__ is used with this repository\n- IEX Cloud is a data source with additional data attribution instructions available at https:\u002F\u002Fiextrading.com\u002Fdeveloper\u002Fdocs\u002F#attribution\n\nAdding Celery Tasks\n===================\n\nIf you want to add a new Celery task, add the file path to WORKER_TASKS at these locations:\n\n- compose\u002Fenvs\u002Flocal.env\n- compose\u002Fenvs\u002F.env\n- analysis_engine\u002Fwork_tasks\u002Fconsts.py\n\n","Stock Analysis Engine\n=====================\n\nBuild and tune investment algorithms for use with `artificial intelligence (deep neural networks) \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FComparing-3-Deep-Neural-Networks-Trained-to-Predict-a-Stocks-Closing-Price-Using-The-Analysis-Engine.ipynb>`__ with a distributed stack for running backtests on publicly traded companies with live pricing data (including: quotes, options, news, dividends, daily, intraday, screeners, statistics, financials, earnings and more) using automated feeds from `IEX Cloud \u003Chttps:\u002F\u002Fiexcloud.io\u002F>`__, `Tradier \u003Chttps:\u002F\u002Ftradier.com\u002F>`__ and `FinViz \u003Chttps:\u002F\u002Ffinviz.com>`__.\n\nKubernetes users please refer to the `Helm Getting Started Guide \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fdeploy_on_kubernetes_using_helm.html>`__ and `Metalnetes for running multiple analysis engines at the same time on bare-metal servers \u003Chttps:\u002F\u002Fmetalnetes.readthedocs.io\u002Fen\u002Flatest\u002F#>`__.\n\n.. 
image:: https:\u002F\u002Fi.imgur.com\u002Ftw2wJ6t.png\n\nFetch the Latest Pricing Data\n=============================\n\nSupported ways to fetch pricing data:\n\n- the ``fetch`` command line tool\n- the `IEX Cloud Fetch API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fiex_api.html#iex-fetch-api-reference>`__\n- the `Tradier Fetch API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Ftradier.html#tradier-fetch-api-reference>`__\n- ``.\u002Fcompose\u002Fstart.sh -c`` with Docker Compose\n- Kubernetes jobs: `fetch intraday data \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_intraday_per_minute.yml>`__, `fetch daily data \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_daily.yml>`__, `fetch weekly data \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_weekly.yml>`__, or `fetch from just Tradier \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_tradier_per_minute.yml>`__\n\nFetch Data with the Command Line\n---------------------------------\n\nThis video shows fetching the latest pricing data for a ticker from the command line:\n\n.. image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220460.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220460?autoplay=1\n    :alt: Fetch pricing data with the command line\n\n#.  Clone to ``\u002Fopt\u002Fsa``\n\n    ::\n\n        git clone https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine.git \u002Fopt\u002Fsa\n        cd \u002Fopt\u002Fsa\n\n#.  Create the Docker mounts and start Redis and Minio\n\n    This will pull the `Redis \u003Chttps:\u002F\u002Fhub.docker.com\u002F_\u002Fredis>`__ and `Minio \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fminio\u002Fminio>`__ docker images.\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh -a\n\n#.  Fetch all pricing data\n\n    #.  
`Follow the Getting Started section \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine#getting-started>`__\n\n    #.  Fetch pricing data from `IEX Cloud (requires an account and usage-based billing) \u003Chttps:\u002F\u002Fiexcloud.io\u002Fcloud-login#\u002Fregister\u002F>`__ and `Tradier (requires an account) \u003Chttps:\u002F\u002Fdeveloper.tradier.com\u002Fgetting_started>`__:\n\n        - Set the **IEX_TOKEN** environment variable to fetch from the IEX Cloud feed:\n\n        ::\n\n            export IEX_TOKEN=YOUR_IEX_TOKEN\n\n        - Set the **TD_TOKEN** environment variable to fetch from the Tradier feed:\n\n        ::\n\n            export TD_TOKEN=YOUR_TRADIER_TOKEN\n\n        - Fetch data with:\n\n        ::\n\n            fetch -t SPY\n\n        - To fetch from just **IEX**, use **-g iex**:\n\n        ::\n\n            fetch -t SPY -g iex\n            # and to fetch from just Tradier use:\n            # fetch -t SPY -g td\n\n        - Fetch the last 30 calendar days of intraday minute pricing data from IEX Cloud with:\n\n        ::\n\n            backfill-minute-data.sh TICKER\n            # backfill-minute-data.sh SPY\n\n    #.  Please refer to the `docs for more examples on controlling how pricing requests are used \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fscripts.html#module-analysis_engine.scripts.fetch_new_stock_datasets>`__\n\n    .. note:: Fetching pricing data from Yahoo is disabled by default because Yahoo `shut down the YQL finance API \u003Chttps:\u002F\u002Fdeveloper.yahoo.com\u002Fyql\u002F>`__\n\n#.  View the compressed pricing data in Redis\n\n    ::\n\n        redis-cli keys \"SPY_*\"\n        redis-cli get \"\u003Ckey like SPY_2019-01-08_minute>\"\n\nBacktest with the Algorithm Runner API\n===========================================\n\nRun a backtest with the latest pricing data:\n\n.. 
code-block:: python\n\n    import analysis_engine.algo_runner as algo_runner\n    import analysis_engine.plot_trading_history as plot\n    runner = algo_runner.AlgoRunner('SPY')\n    # run the algorithm with the latest 200 minutes of data:\n    df = runner.latest()\n    print(df[['minute', 'close']].tail(5))\n    plot.plot_trading_history(\n        title=(\n            f'SPY - ${df[\"close\"].iloc[-1]} at: '\n            f'{df[\"minute\"].iloc[-1]}'),\n        df=df)\n    # start a full backtest with:\n    # runner.start()\n\nPlease check out the `backtest_with_runner.py script \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Fbacktest_with_runner.py>`__ for a command line example that uses the `Algorithm Runner API \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Falgo_runner.html>`__ to run and plot from an `algorithm backtest config file \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcfg\u002Fdefault_algo.json>`__.\n\nExtract from the Redis API\n==========================\n\nAfter the data extraction finishes, you can extract datasets from the Redis cache with:\n\n.. code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract('SPY'))\n\nExtract the Latest Minute Pricing for Stocks and Options\n========================================================\n\n.. code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract(\n        'SPY',\n        datasets=['minute', 'tdcalls', 'tdputs']))\n\nExtract Historical Data\n-----------------------\n\nExtract historical data with the ``date`` argument formatted ``YYYY-MM-DD``:\n\n.. 
code-block:: python\n\n    import analysis_engine.extract as ae_extract\n    print(ae_extract.extract(\n        'AAPL',\n        datasets=['minute', 'daily', 'financials', 'earnings', 'dividends'],\n        date='2019-02-15'))\n\nAdditional Extraction APIs\n==========================\n\n- `Extraction API reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fextract.html>`__\n- `IEX Cloud Extraction API reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Fiex_api.html#iex-extraction-api-reference>`__\n- `Tradier Extraction API reference \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Ftradier.html#tradier-extraction-api-reference>`__\n- `Inspect cached datasets in Redis for errors \u003Chttps:\u002F\u002Fstock-analysis-engine.readthedocs.io\u002Fen\u002Flatest\u002Finspect_datasets.html#module-analysis_engine.scripts.inspect_datasets>`__\n\nBackups\n=======\n\nPricing data in Redis is automatically compressed, and there is an example `Kubernetes job for backing up all stored pricing data to AWS S3 \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fbackups\u002Fbackup-to-aws-job.yml>`__.\n\nRun the Full Stack Locally for Backtesting and Live Trading Analysis\n========================================================================\n\nWhile the full stack is not required for backtesting, it is required for running algorithms during live trading hours. This video shows deploying the entire system locally with Docker Compose using the commands from the video:\n\n.. image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220487.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220487?autoplay=1\n    :alt: Run the Full Stack Locally for Backtesting and Live Trading Analysis\n\n#.  Start the workers, backtester, pricing data collection, Jupyter, Redis and Minio\n\n    Now start the rest of the stack with the command below. This will pull the `~3.0 GB stock-analysis-engine docker image \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fjayjohnson\u002Fstock-analysis-engine>`__ and start the workers, backtester, dataset collection and the `Jupyter image \u003Chttps:\u002F\u002Fhub.docker.com\u002Fr\u002Fjayjohnson\u002Fstock-analysis-jupyter>`__. Redis and Minio will be started too if they are not already running.\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh\n\n    .. 
tip:: For Mac OS X users, please note `there is a known Docker Compose issue with network_mode: \"host\" \u003Chttps:\u002F\u002Fgithub.com\u002Fdocker\u002Ffor-mac\u002Fissues\u002F1031>`__, so you may have issues trying to connect to your services.\n\n#.  Check the Docker containers\n\n    ::\n\n        docker ps -a\n\n#.  View the dataset collection logs\n\n    ::\n\n        logs-dataset-collection.sh\n\n#.  Wait for the pricing engine logs to stop, then press ``ctrl+c``\n\n    ::\n\n        logs-workers.sh\n\n#.  Verify pricing data is in Redis\n\n    ::\n\n        redis-cli keys \"*\"\n\n#.  Optional - automate pricing data collection with the `automation-dataset-collection.yml Docker Compose file \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fautomation-dataset-collection.yml>`__:\n\n    .. note:: Depending on how fast you want your intraday algorithms to run, you can use this Docker Compose job, the `Kubernetes job \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fjob.yml>`__, or the `Kubernetes job for fetching from just Tradier \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fk8\u002Fdatasets\u002Fpull_tradier_per_minute.yml>`__ to collect the latest pricing information.\n\n    ::\n\n        .\u002Fcompose\u002Fstart.sh -c\n\nRun a Custom Minute-by-Minute Intraday Algorithm Backtest and Plot the Trading History\n======================================================================================\n\nWith pricing data in Redis, you can start running backtests a few ways:\n\n- `Compare 3 deep neural networks trained to predict a stock's closing price in a Jupyter Notebook \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FComparing-3-Deep-Neural-Networks-Trained-to-Predict-a-Stocks-Closing-Price-Using-The-Analysis-Engine.ipynb>`__\n- `Build, run and tune algorithms in a Jupyter Notebook, plotting the balance vs the stock's closing price as it runs \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FRun-a-Custom-Trading-Algorithm-Backtest-with-Minute-Timeseries-Pricing-Data.ipynb>`__\n- `Analyze and replay algorithm trading histories stored in S3 with this Jupyter Notebook 
\u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fcompose\u002Fdocker\u002Fnotebooks\u002FAnalyze%20Compressed%20Algorithm%20Trading%20Histories%20Stored%20in%20S3.ipynb>`__\n- `Run with the command line backtest tool \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py>`__\n- `Advanced - build a standalone algorithm class for trading analysis \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__\n\nRun an Algorithm with Live Intraday Pricing Data\n====================================================\n\nThis video shows running the algorithm:\n\n.. image:: https:\u002F\u002Fasciinema.org\u002Fa\u002F220498.png\n    :target: https:\u002F\u002Fasciinema.org\u002Fa\u002F220498?autoplay=1\n    :alt: Run an Algorithm with Live Intraday Pricing Data\n\nThe `backtest command line tool \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py>`__ uses an `algorithm config dictionary \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Ftests\u002Falgo_configs\u002Ftest_5_days_ahead.json>`__ to build multiple `Williams %R indicators \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fscripts\u002Frun_backtest_and_plot_history.py#L49>`__ into an algorithm that starts with a **$10,000.00** balance. Once configured, the backtest iterates over each trading dataset and decides whether to buy or sell based on the pricing data. When it finishes, the tool uses matplotlib and seaborn to plot the algorithm's **balance** and the stock's **close** price per minute.\n\n::\n\n    # this can take a few minutes to evaluate\n    # as more data is collected\n    # because each day has 390 rows to process\n    bt -t SPY -f \u002Ftmp\u002Fhistory.json\n\n.. note:: The algorithm's **trading history** dataset provides many additional columns for tuning indicators and custom buy\u002Fsell rules. To reduce the time spent waiting on the algorithm to finish processing, you can save the entire trading history to disk with the ``-f \u003Csave_to_file>`` argument.\n\nView the Minute Algorithm's Trading History from a File\n=======================================================\n\nOnce the **trading history** is saved to disk, you can reopen it and plot other columns in the dataset with:\n\n.. 
image:: https:\u002F\u002Fi.imgur.com\u002FpH368gy.png\n\n::\n\n    # by default the plot shows\n    # the balance vs the close price per minute\n    plot-history -f \u002Ftmp\u002Fhistory.json\n\nRun a Custom Algorithm and Save the Trading History with just Today's Pricing Data\n==================================================================================\n\nHere is how to run an algorithm during live trading hours. This approach assumes another process or cron is fetching pricing data with the engine so the algorithm has access to the latest pricing data:\n\n::\n\n    bt -t SPY -f \u002Ftmp\u002FSPY-history-$(date +\"%Y-%m-%d\").json -j $(date +\"%Y-%m-%d\")\n\n.. note:: The ``-j \u003CDATE>`` argument tells the algorithm to **jump to this date** before starting to trade. This is helpful for debugging indicators, algorithms, dataset issues and buy\u002Fsell rules.\n\nBacktest an External Algorithm Module with an External Config File\n===================================================================\n\nRun an algorithm backtest with a standalone algorithm class contained in a single Python module file, which can even live outside the repository, using a config file on disk:\n\n::\n\n    ticker=SPY\n    algo_config=\u003CCUSTOM_ALGO_CONFIG_DIR>\u002Fminute_algo.json\n    algo_mod=\u003CCUSTOM_ALGO_MODULE_DIR>\u002Fminute_algo.py\n    bt -t ${ticker} -c ${algo_config} -g ${algo_mod}\n\nAlternatively, the config file can set the path to an external algorithm module file with ``\"algo_path\": \"\u003CPATH_TO_FILE>\"``.\n\n::\n\n    bt -t ${ticker} -c ${algo_config}\n\n.. note:: A standalone algorithm class must derive from the ``analysis_engine.algo.BaseAlgo`` class.\n\nBuild Your Own Trading Algorithms\n==================================\n\nBeyond running backtests, the included engine supports running many algorithms and fetching data for live trading or backtesting at the same time. As you start using this approach, you will generate lots of algorithm-ready pricing datasets, history datasets, and upcoming performance datasets for AI training. Since these algorithms share the same dataset structure, you can share **ready-to-go** datasets with a team and publish them to S3 for kicking off backtests with lambda functions, or just archive them for disaster recovery.\n\n.. 
note:: Backtests can use **ready-to-go** datasets from S3, Redis or a file.\n\nThe next section covers how to build an `algorithm-ready dataset \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine#extract-algorithm-ready-datasets>`__ from the pricing data cached in Redis.\n\nRun a Local Backtest and Publish the Algorithm Trading History to S3\n=====================================================================\n\n::\n\n    ae -t SPY -p s3:\u002F\u002Falgohistory\u002Falgo_training_SPY.json\n\nRun distributed across the engine's workers with ``-w``\n\n::\n\n    ae -w -t SPY -p s3:\u002F\u002Falgohistory\u002Falgo_training_SPY.json\n\nRun a Local Backtest with an Algorithm Config and Extract an Algorithm-Ready Dataset\n=====================================================================================\n\nUse this command to start a local backtest with the included `algorithm config \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Ftests\u002Falgo_configs\u002Ftest_5_days_ahead.json>`__. Once the backtest finishes, it will also generate a local algorithm-ready dataset saved to a file.\n\n#.  Define common values\n\n    ::\n\n        ticker=SPY\n        algo_config=tests\u002Falgo_configs\u002Ftest_5_days_ahead.json\n        extract_loc=file:\u002Ftmp\u002Falgoready-SPY-latest.json\n        history_loc=file:\u002Ftmp\u002Fhistory-SPY-latest.json\n        load_loc=${extract_loc}\n\nRun the Algorithm with Extraction and History Publishing\n---------------------------------------------------------\n\n::\n\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -e ${extract_loc} -p ${history_loc}\n\nProfile your Algorithm's Code Performance with vprof\n=====================================================\n\n.. image:: https:\u002F\u002Fi.imgur.com\u002F1cwDUBC.png\n\npip includes `vprof for profiling algorithm code performance (cpu, memory, profiler and heatmaps - nothing to do with money) \u003Chttps:\u002F\u002Fgithub.com\u002Fnvdv\u002Fvprof>`__, which generated the CPU flame graph above.\n\nProfile your algorithm's code performance with these steps:\n\n#.  Start vprof in remote mode in a first terminal\n\n    .. note:: This command will start a web application on port ``3434``\n\n    ::\n\n        vprof -r -p 3434\n\n#.  Start the profiler in a second terminal\n\n    .. 
note:: This command pushes data to the web application listening on port ``3434`` in the other terminal\n\n    ::\n\n        vprof -c cm .\u002Fanalysis_engine\u002Fperf\u002Fprofile_algo_runner.py\n\nRun a Local Backtest with an Algorithm Config and an Algorithm-Ready Dataset\n=============================================================================\n\nAfter generating a local algorithm-ready dataset (which can take some time), use this command to run the backtest again from the file on disk:\n\n::\n\n    dev_history_loc=file:\u002Ftmp\u002Fdev-history-${ticker}-latest.json\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc}\n\nView Buy and Sell Transactions\n-------------------------------\n\n::\n\n    run-algo-history-to-file.sh -t ${ticker} -c ${algo_config} -l ${load_loc} -p ${dev_history_loc} | grep \"TRADE\"\n\nPlot Trading History Tools\n===========================\n\nPlot the Trading History Timeseries with High, Low, Open and Close\n-------------------------------------------------------------------\n\n::\n\n    sa -t SPY -H ${dev_history_loc}\n\nRun and Publish a Trading Performance Report for a Custom Algorithm\n====================================================================\n\nThis will run a backtest over the past 60 days in order and run the `standalone algorithm class example \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__. Once done, it will publish the trading performance report to a file or Minio (S3).\n\nWrite the Trading Performance Report to a Local File\n-----------------------------------------------------\n\n::\n\n    run-algo-report-to-file.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n    # run-algo-report-to-file.sh -t \u003CTICKER> -b \u003CNUM_DAYS_BACK> -a \u003CCUSTOM_ALGO_MODULE>\n    # run a specific date range with:\n    # -s \u003Cstart date YYYY-MM-DD> -n \u003Cend date YYYY-MM-DD>\n\nWrite the Trading Performance Report to Minio (S3)\n---------------------------------------------------\n\n::\n\n    run-algo-report-to-s3.sh -t SPY -b 60 -a \u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py\n\nRun and Publish the Trading History for a Custom Algorithm\n===========================================================\n\nThis will run a full backtest over the past 60 days in order and run the `example algorithm \u003Chttps:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002Fanalysis_engine\u002Fmocks\u002Fexample_algo_minute.py>`__. Once done, it will publish the trading history to a file or Minio (S3).\n\nWrite the Trading History to a Local File\n------------------------------------------\n\n::\n\n    
Run and Publish a Trading History for a Custom Algorithm
========================================================

This runs a full backtest over the past 60 days in sequence using the `example algorithm <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/mocks/example_algo_minute.py>`__. Once done, it publishes the trading history to a file or Minio (S3).

Write the Trading History to a Local File
-----------------------------------------

::

    run-algo-history-to-file.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py

Write the Trading History to Minio (S3)
---------------------------------------

::

    run-algo-history-to-s3.sh -t SPY -b 60 -a /opt/sa/analysis_engine/mocks/example_algo_minute.py

Developing on AWS
=================

If you are comfortable with AWS S3 usage charges, then you can develop and tune algorithms with just a running Redis server. This is helpful for teams and for archiving datasets for disaster recovery.

Environment Variables
---------------------

Export these based on your AWS IAM credentials and S3 endpoint:

::

    export AWS_ACCESS_KEY_ID="ACCESS"
    export AWS_SECRET_ACCESS_KEY="SECRET"
    export S3_ADDRESS=s3.us-east-1.amazonaws.com

Extract and Publish to AWS S3
=============================

::

    ./tools/backup-datasets-on-s3.sh -t TICKER -q YOUR_BUCKET -k ${S3_ADDRESS} -r localhost:6379

Publish to a Custom AWS S3 Bucket and Key
=========================================

::

    extract_loc=s3://YOUR_BUCKET/TICKER-latest.json
    ./tools/backup-datasets-on-s3.sh -t TICKER -e ${extract_loc} -r localhost:6379

Backtest a Custom Algorithm with a Dataset on AWS S3
====================================================

::

    backtest_loc=s3://YOUR_BUCKET/TICKER-latest.json
    custom_algo_module=/opt/sa/analysis_engine/mocks/example_algo_minute.py
    sa -t TICKER -a ${S3_ADDRESS} -r localhost:6379 -b ${backtest_loc} -g ${custom_algo_module}

Fetch New Tradier Pricing Data Every Minute with Kubernetes
===========================================================

If you want to fetch and append new option pricing data from `Tradier <https://developer.tradier.com/getting_started>`__, you can use the included Kubernetes job with a cron schedule to pull new data every minute:

::

    kubectl apply -f /opt/sa/k8/datasets/pull_tradier_per_minute.yml

Run a Distributed 60-day Backtest on SPY and Publish the Trading Report, Trading History and Algorithm-Ready Dataset to S3
==========================================================================================================================

Publish backtests and live-trading algorithms to run on the engine's workers so many algorithms run at the same time. Once done, the algorithms publish their results to S3, Redis or a local file. By default, the example below publishes all datasets to Minio (S3), from where they can be downloaded for offline backtests or restored back into Redis.

.. note::
   Running a distributed algorithm workload requires Redis, Minio and the analysis engine to be running.

::

    num_days_back=60
    ./tools/run-algo-with-publishing.sh -t SPY -b ${num_days_back} -w

Run a Local 60-day Backtest on SPY and Publish the Trading Report, Trading History and Algorithm-Ready Dataset to S3
====================================================================================================================

::

    num_days_back=60
    ./tools/run-algo-with-publishing.sh -t SPY -b ${num_days_back}

Or manually:

::

    ticker=SPY
    num_days_back=60
    use_date=$(date +"%Y-%m-%d")
    ds_id=$(uuidgen | sed -e 's/-//g')
    ticker_dataset="${ticker}-${use_date}_${ds_id}.json"
    echo "creating ${ticker} dataset: ${ticker_dataset}"
    extract_loc="s3://algoready/${ticker_dataset}"
    history_loc="s3://algohistory/${ticker_dataset}"
    report_loc="s3://algoreport/${ticker_dataset}"
    backtest_loc="s3://algoready/${ticker_dataset}"  # same as the extract_loc
    processed_loc="s3://algoprocessed/${ticker_dataset}"  # archive it when done
    start_date=$(date --date="${num_days_back} day ago" +"%Y-%m-%d")
    echo ""
    echo "extracting algorithm-ready dataset: ${extract_loc}"
    echo "sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}"
    sa -t SPY -e ${extract_loc} -s ${start_date} -n ${use_date}
    echo ""
    echo "running algo with: ${backtest_loc}"
    echo "sa -t SPY -p ${history_loc} -o ${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}"
    sa -t SPY -p ${history_loc} -o ${report_loc} -b ${backtest_loc} -e ${processed_loc} -s ${start_date} -n ${use_date}

Jupyter on Kubernetes
=====================

This command runs Jupyter on the `AntiNex Kubernetes cluster <https://deploy-to-kubernetes.readthedocs.io/en/latest/>`__:

::

    ./k8/jupyter/run.sh ceph dev
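The manual shell snippet above builds a unique dataset name from today's date plus a ``uuidgen`` id, and derives the backtest start date. The same naming scheme can be sketched in Python (the helper name ``build_dataset_name`` is ours, not part of the engine):

```python
# Sketch: mirror the shell dataset naming above in Python.
# build_dataset_name is an illustrative helper, not an engine API.
import datetime
import uuid


def build_dataset_name(ticker, num_days_back, today=None):
    """Return (dataset file name, start date, end date) as strings."""
    today = today or datetime.date.today()
    ds_id = uuid.uuid4().hex  # same as: uuidgen | sed -e 's/-//g'
    start_date = today - datetime.timedelta(days=num_days_back)
    name = f'{ticker}-{today:%Y-%m-%d}_{ds_id}.json'
    return name, f'{start_date:%Y-%m-%d}', f'{today:%Y-%m-%d}'


if __name__ == '__main__':
    name, start, end = build_dataset_name('SPY', 60)
    print(f'creating SPY dataset: {name} ({start} -> {end})')
```

Embedding the date and a random id in the key keeps repeated runs from overwriting each other in the ``algoready``, ``algohistory`` and ``algoreport`` buckets.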
Kubernetes - Analyze and Tune Algorithms from a Trading History
===============================================================

Once a Jupyter instance with the analysis engine is deployed, you can use this notebook to `tune algorithms from a trading history <https://aejupyter.example.com/notebooks/Analyze%20Compressed%20Algorithm%20Trading%20Histories%20Stored%20in%20S3.ipynb>`__.

Kubernetes Job - Export SPY Datasets and Publish to Minio
=========================================================

When running manually, you can use an ``ssheng`` alias like this one:

::

    function ssheng() {
        pod_name=$(kubectl get po | grep ae-engine | grep Running | tail -1 | awk '{print $1}')
        echo "logging into ${pod_name}"
        kubectl exec -it ${pod_name} bash
    }
    ssheng
    # once inside the container on kubernetes
    source /opt/venv/bin/activate
    sa -a minio-service:9000 -r redis-master:6379 -e s3://backups/SPY-$(date +"%Y-%m-%d") -t SPY

View Algorithm-Ready Datasets
-----------------------------

With the AWS CLI configured, you can view the algorithm-ready datasets available in your Minio (S3) bucket with:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algoready

View Trading History Datasets
-----------------------------

With the AWS CLI configured, you can view the trading history datasets available in your Minio (S3) bucket with:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algohistory

View Trading Performance Report Datasets
----------------------------------------

With the AWS CLI configured, you can view the trading performance report datasets available in your Minio (S3) bucket with:

::

    aws --endpoint-url http://localhost:9000 s3 ls s3://algoreport

Advanced - Running Algorithm Backtests Offline
==============================================

With `algorithm-ready datasets already extracted to Minio (S3), Redis or a file <https://github.com/AlgoTraders/stock-analysis-engine#extract-algorithm-ready-datasets>`__, you can develop and tune your own algorithms locally without running Redis, Minio, the analysis engine or Jupyter.

Run an Offline Custom Algorithm Backtest with an Algorithm-Ready File
---------------------------------------------------------------------

::

    # extract with:
    sa -t SPY -e file:/tmp/SPY-latest.json
    sa -t SPY -b file:/tmp/SPY-latest.json -g /opt/sa/analysis_engine/mocks/example_algo_minute.py
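A custom algorithm like the one passed with ``-g`` boils down to a decision rule evaluated over the minute-by-minute pricing in the dataset. As a self-contained toy of that kind of rule (a short/long moving-average crossover; this deliberately does not use the engine's ``BaseAlgo`` API, so the function names here are ours):

```python
# Toy decision logic of the kind a custom minute algorithm implements.
# Self-contained sketch; not the analysis_engine BaseAlgo API.

def sma(values, window):
    """Simple moving average over the last `window` values, or None."""
    if len(values) < window:
        return None
    return sum(values[-window:]) / window


def decide(minute_closes, short=3, long=5):
    """Return 'buy', 'sell' or 'hold' from a fast/slow SMA crossover."""
    fast = sma(minute_closes, short)
    slow = sma(minute_closes, long)
    if fast is None or slow is None:
        return 'hold'  # not enough pricing data yet
    if fast > slow:
        return 'buy'
    if fast < slow:
        return 'sell'
    return 'hold'
```

In a real algorithm module, a rule like this would run once per dataset row and record the resulting trades into the history that the backtest publishes.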
Run the Intraday Minute-by-Minute Algorithm and Publish the Algorithm-Ready Dataset to S3
-----------------------------------------------------------------------------------------

Run the included standalone algorithm `example_algo_minute.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/mocks/example_algo_minute.py>`__ with the latest pricing datasets:

::

    sa -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json

Add the ``-d`` debug flag to debug the algorithm's historical trading performance:

::

    sa -d -t SPY -g /opt/sa/analysis_engine/mocks/example_algo_minute.py -e s3://algoready/SPY-$(date +"%Y-%m-%d").json

Extract Algorithm-Ready Datasets
================================

With pricing data cached in Redis, you can extract algorithm-ready datasets and save them to a local file for offline historical backtesting analysis. This also works as a local backup that gathers all of a single stock's cached datasets into one local file.

Extract an Algorithm-Ready Dataset from Redis and Save It to a File
-------------------------------------------------------------------

::

    sa -t SPY -e ~/SPY-latest.json

Create a Daily Backup
---------------------

::

    sa -t SPY -e ~/SPY-$(date +"%Y-%m-%d").json

Validate the Daily Backup by Inspecting the Dataset File
--------------------------------------------------------

::

    sa -t SPY -l ~/SPY-$(date +"%Y-%m-%d").json

Restore a Backup to Redis
-------------------------

Use this command to cache any missing pricing datasets so algorithms have the correct, ready-to-go data before making buy and sell predictions.

.. note:: By default, this command does not overwrite datasets already in Redis. It is intended as a tool for merging Redis pricing datasets after a VM restart, when pricing data from the past few days may be missing (gaps in pricing data are bad for algorithms).

::

    sa -t SPY -L ~/SPY-$(date +"%Y-%m-%d").json

Fetch
-----

With Redis and Minio running (``./compose/start.sh``), you can fetch, cache, archive and return all of the latest datasets for a ticker:

.. code-block:: python

    from analysis_engine.fetch import fetch
    d = fetch(ticker='SPY')
    for k in d['SPY']:
        print(f'dataset key: {k}\nvalue {d["SPY"][k]}\n')
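For validating a daily backup without the ``sa -l`` command, a quick check is to load the JSON file and list the dataset keys it contains. The ticker-to-datasets nesting below mirrors the fetch example above, but the exact on-disk layout is an assumption for illustration:

```python
# Sketch: inspect a backup like ~/SPY-YYYY-MM-DD.json by listing its
# dataset keys. The {'SPY': {<dataset>: ...}} layout is an assumed
# illustration of the file structure, not a documented contract.
import json
import os
import tempfile


def list_dataset_keys(path, ticker):
    """Return the sorted dataset keys stored for a ticker in a backup file."""
    with open(path) as f:
        data = json.load(f)
    return sorted(data.get(ticker, {}).keys())


# demo with a stand-in backup file
demo = {'SPY': {'daily': [], 'minute': [], 'news': []}}
with tempfile.NamedTemporaryFile('w', suffix='.json', delete=False) as f:
    json.dump(demo, f)
    path = f.name
keys = list_dataset_keys(path, 'SPY')
os.unlink(path)
```

An empty key list is a quick signal that an extraction failed or targeted the wrong ticker.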
Backfill Historical Minute Data from IEX Cloud
==============================================

.. note:: `IEX Cloud supports pulling data from today back 30 calendar days <https://iexcloud.io/docs/api/#historical-prices>`__

::

    fetch -t TICKER -F PAST_DATE -g iex_min
    # example:
    # fetch -t SPY -F 2019-02-07 -g iex_min

Please refer to the `Stock Analysis Intro Extracting Datasets Jupyter notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro-Extracting-Datasets.ipynb>`__ for the latest usage examples.

.. list-table::
   :header-rows: 1

   * - `Build <https://travis-ci.org/AlgoTraders/stock-analysis-engine>`__
   * - .. image:: https://api.travis-ci.org/AlgoTraders/stock-analysis-engine.svg
           :alt: Travis tests
           :target: https://travis-ci.org/AlgoTraders/stock-analysis-engine

Getting Started
===============

This section outlines how to run the Stock Analysis stack locally, including:

- Redis
- Minio (S3)
- Stock Analysis engine
- Jupyter

For background, the stack provides a data pipeline that automatically archives pricing data in `minio (s3) <https://minio.io>`__ and caches pricing data in Redis. Once cached or archived, custom algorithms can use the pricing data to determine buy or sell conditions and track internal trading performance across historical backtests.

From a technical perspective, the engine uses `Celery workers to process heavyweight, asynchronous tasks <http://www.celeryproject.org/>`__ and supports `many transports and backends for horizontal scaling <https://github.com/celery/celery#transports-and-backends>`__ depending on where you need it to run. The stack deploys with `Kubernetes <https://github.com/AlgoTraders/stock-analysis-engine#running-on-kubernetes>`__ or Docker Compose, and `supports publishing trading alerts to Slack <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro-Publishing-to-Slack.ipynb>`__.

With the stack already running, please refer to the `Intro Stock Analysis using Jupyter notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro.ipynb>`__ for more getting-started examples.

Setting up Your Tradier Account with Docker Compose
===================================================

Please set your Tradier account token in the Docker environment files before starting the stack:

::

    grep -r SETYOURTRADIERTOKENHERE compose/*
    compose/envs/backtester.env:TD_TOKEN=SETYOURTRADIERTOKENHERE
    compose/envs/workers.env:TD_TOKEN=SETYOURTRADIERTOKENHERE

Also export the variable for local development:

::

    export TD_TOKEN=<TRADIER_ACCOUNT_TOKEN>

.. note:: After setting the Tradier token environment variables, stop the stack with ``./compose/stop.sh`` and restart it with ``./compose/start.sh``.

#.  Start Redis and Minio

    .. note:: The Redis and Minio containers are set up to save data to the ``/data`` directory so the files survive a restart. On Mac OS X, make sure to add ``/data`` (and ``/data/sa/notebooks`` for Jupyter notebooks) on the Docker Preferences -> File Sharing tab and let the Docker daemon restart before trying to start the containers. If not, you will likely see an error like:

        ::

            ERROR: for minio  Cannot start service minio:
            b'Mounts denied: \r\nThe path /data/minio/data\r\nis not shared from OS X

        Here are the commands to manually create the shared volume directories:

        ::

            sudo mkdir -p -m 777 /data/redis/data /data/minio/data /data/sa/notebooks/dev /data/registry/auth /data/registry/data

    ::

        ./compose/start.sh

#.  Verify Redis and Minio are running

    ::

        docker ps | grep -E "redis|minio"
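Beyond eyeballing ``docker ps``, a quick programmatic check is to confirm the Redis and Minio ports actually accept connections. A minimal sketch (the helper name ``port_open`` is ours):

```python
# Sketch: check that the Redis (6379) and Minio (9000) endpoints are
# reachable before running fetches or backtests.
import socket


def port_open(host, port, timeout=1.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False


if __name__ == '__main__':
    for name, port in (('redis', 6379), ('minio', 9000)):
        state = 'up' if port_open('localhost', port) else 'down'
        print(f'{name}: {state}')
```

This only proves the port is listening, not that the service is healthy, but it catches the common "container not started" and "volume mount denied" failures early.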
Running on Ubuntu and CentOS
============================

#.  Install packages

    Ubuntu

    ::

        sudo apt-get install make cmake gcc python3-distutils python3-tk python3 python3-apport python3-certifi python3-dev python3-pip python3-venv python3.6 redis-tools virtualenv libcurl4-openssl-dev libssl-dev

    CentOS 7

    ::

        sudo yum install cmake gcc gcc-c++ make tkinter curl-devel make cmake python-devel python-setuptools python-pip python-virtualenv redis python36u-libs python36u-devel python36u-pip python36u-tkinter python36u-setuptools python36u openssl-devel

#.  Install TA-Lib

    Follow the `TA-Lib install guide <https://mrjbq7.github.io/ta-lib/install.html>`__ or use the included install tool as root:

    ::

        sudo su
        /opt/sa/tools/linux-install-talib.sh
        exit

#.  Create and load a Python 3 virtual environment

    ::

        virtualenv -p python3 /opt/venv
        source /opt/venv/bin/activate
        pip install --upgrade pip setuptools

#.  Install the analysis pip

    ::

        pip install -e .

#.  Verify the pip is installed

    ::

        pip list | grep stock-analysis-engine

Running on Mac OS X
===================

#.  Download Python 3.6

    .. note:: Celery does not support Python 3.7, so please make sure it is a Python 3.6 version.

    https://www.python.org/downloads/mac-osx/

#.  Install packages

    ::

        brew install openssl pyenv-virtualenv redis freetype pkg-config gcc ta-lib

    .. note:: For Mac OS X users, installs of ``keras``, ``tensorflow`` and ``h5py`` are not yet debugged. Please let us know if you hit any issues setting up your environment, as we may not have run into them before.

#.  Create and load a Python 3 virtual environment

    ::

        python3 -m venv /opt/venv
        source /opt/venv/bin/activate
        pip install --upgrade pip setuptools

#.  Install certs

    After hitting SSL verify errors, I found this `Stack Overflow post <https://stackoverflow.com/questions/42098126/mac-osx-python-ssl-sslerror-ssl-certificate-verify-failed-certificate-verify>`__ which mentioned an additional step for setting up Python 3.6:

    ::

        /Applications/Python\ 3.6/Install\ Certificates.command

#.  Install PyCurl with OpenSSL

    ::

        PYCURL_SSL_LIBRARY=openssl LDFLAGS="-L/usr/local/opt/openssl/lib" CPPFLAGS="-I/usr/local/opt/openssl/include" pip install --no-cache-dir pycurl

#.  Install the analysis pip

    ::

        pip install --upgrade pip setuptools
        pip install -e .

#.  Verify the pip is installed
    ::

        pip list | grep stock-analysis-engine

Start the Workers
=================

::

    ./start-workers.sh

Fetch and Publish Pricing Data
==============================

Please refer to the latest API docs in the repository:

https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/api_requests.py

Fetch New Stock Datasets
========================

Run a ticker analysis with `./analysis_engine/scripts/fetch_new_stock_datasets.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__:

Collect All Datasets for a Ticker or Symbol
-------------------------------------------

Collect all datasets for the ticker **SPY**:

::

    fetch -t SPY

.. note:: This requires the following services to be listening:

    - redis ``localhost:6379``
    - minio ``localhost:9000``

View the Engine Worker Logs
---------------------------

::

    docker logs ae-workers

Running Inside Docker Containers
--------------------------------

If the engine you are using runs inside a Docker container, then ``localhost`` is probably not the correct network hostname for finding ``redis`` and ``minio``.

If you are deploying the analysis engine stack with the `integration <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/integration.yml>`__ or `notebook integration <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/notebook-integration.yml>`__ Docker Compose files, set these values as needed so dataset artifacts are published and archived:

::

    fetch -t SPY -a 0.0.0.0:9000 -r 0.0.0.0:6379
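Because the right hostnames differ between host and container networking, one common pattern is to resolve the ``<host:port>`` endpoints from environment variables with ``localhost`` fallbacks. A minimal sketch (the helper ``resolve_endpoint`` is illustrative; only the ``REDIS_ADDRESS`` and ``S3_ADDRESS`` variable names come from the engine's docs):

```python
# Sketch: resolve Redis/S3 endpoints the way the CLI flags expect them,
# preferring env vars and falling back to localhost defaults.
# resolve_endpoint is an illustrative helper, not an engine API.
import os


def resolve_endpoint(env_var, default):
    """Return (host, port) parsed from an env var formatted as <host:port>."""
    address = os.environ.get(env_var, default)
    host, _, port = address.partition(':')
    return host, int(port)


if __name__ == '__main__':
    print(resolve_endpoint('REDIS_ADDRESS', 'localhost:6379'))
    print(resolve_endpoint('S3_ADDRESS', 'localhost:9000'))
```

Inside the compose network the fallbacks would instead be service names like ``redis:6379`` and ``minio:9000``, which is exactly why hardcoding ``localhost`` bites container deployments.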
.. warning:: Sharing a single Redis server across multiple engine workers, whether they run inside Docker containers or not, is not recommended. This is because ``REDIS_ADDRESS`` and ``S3_ADDRESS`` can currently only be set to a single string value. If a task is picked up by the wrong engine (one that cannot connect to the correct Redis and Minio), the data may not be cached or archived correctly and will show up as connectivity failures.

Detailed Usage Examples
-----------------------

The `fetch_new_stock_datasets.py script <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__ supports many parameters. If you have a custom ``redis`` and ``minio`` deployment, for example `minio-service:9000 <https://github.com/AlgoTraders/stock-analysis-engine/blob/7323ad4007b44eaa511d448c8eb500cec9fe3848/k8/engine/deployment.yml#L80-L81>`__ and `redis-master:6379 <https://github.com/AlgoTraders/stock-analysis-engine/blob/7323ad4007b44eaa511d448c8eb500cec9fe3848/k8/engine/deployment.yml#L88-L89>`__ on Kubernetes, you can set:

- S3 authentication (``-k`` and ``-s``)
- the S3 endpoint (``-a``)
- the Redis endpoint (``-r``)
- custom S3 key and Redis key names (``-n``)

::

    fetch -t SPY -g all -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n SPY_demo -P 1 -N 1 -O 1 -U 1 -R 1

Usage
-----

Please refer to the `fetch_new_stock_datasets.py script <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/fetch_new_stock_datasets.py>`__ for the latest supported usage, as the following may be out of date:

::

    fetch -h
    2019-02-11 01:55:33,791 - fetch - INFO - start - fetch_new_stock_datasets
    usage: fetch_new_stock_datasets.py [-h] [-t TICKER] [-g FETCH_MODE]
                                    [-i TICKER_ID] [-e EXP_DATE_STR]
                                    [-l LOG_CONFIG_PATH] [-b BROKER_URL]
                                    [-B BACKEND_URL] [-k S3_ACCESS_KEY]
                                    [-s S3_SECRET_KEY] [-a S3_ADDRESS]
                                    [-S S3_SECURE] [-u S3_BUCKET_NAME]
                                    [-G S3_REGION_NAME] [-p REDIS_PASSWORD]
                                    [-r REDIS_ADDRESS] [-n KEYNAME]
                                    [-m REDIS_DB] [-x REDIS_EXPIRE] [-z STRIKE]
                                    [-c CONTRACT_TYPE] [-P GET_PRICING]
                                    [-N GET_NEWS] [-O GET_OPTIONS]
                                    [-U S3_ENABLED] [-R REDIS_ENABLED]
                                    [-A ANALYSIS_TYPE] [-L URLS] [-Z] [-d]

    Download and store the latest stock pricing, news, and options chain data and store it in Minio (S3) and Redis. Also includes support for getting FinViz screener tickers.

    optional arguments:
    -h, --help          show this help message and exit
    -t TICKER           ticker
    -g FETCH_MODE       optional - fetch mode: initial = default fetch from the initial data feeds (IEX and Tradier), intra = fetch intraday data from IEX and Tradier, daily = fetch daily data from IEX, weekly = fetch weekly data from IEX, all = fetch from all data feeds, td = fetch from Tradier feeds only, iex = fetch from IEX Cloud feeds only, iex_min = fetch the IEX Cloud intraday per-minute feed https://iexcloud.io/docs/api/#historical-prices, iex_day = fetch the IEX Cloud daily feed https://iexcloud.io/docs/api/#historical-prices, iex_quote = fetch the IEX Cloud quotes feed https://iexcloud.io/docs/api/#quote, iex_stats = fetch the IEX Cloud key stats feed https://iexcloud.io/docs/api/#key-stats, iex_peers = fetch just the IEX Cloud peers feed https://iexcloud.io/docs/api/#peers, iex_news = fetch the IEX Cloud news feed https://iexcloud.io/docs/api/#news, iex_fin = fetch the IEX Cloud financials feed https://iexcloud.io/docs/api/#financials, iex_earn = fetch just the IEX Cloud earnings feed https://iexcloud.io/docs/api/#earnings, iex_div = fetch just the IEX Cloud dividends feed https://iexcloud.io/docs/api/#dividends, iex_comp = fetch just the IEX Cloud company feed https://iexcloud.io/docs/api/#company
    -i TICKER_ID        optional - ticker id, not used without a database
    -e EXP_DATE_STR     optional - options expiration date
    -l LOG_CONFIG_PATH  optional - path to the log config file
    -b BROKER_URL       optional - broker url for Celery
    -B BACKEND_URL      optional - backend url for Celery
    -k S3_ACCESS_KEY    optional - s3 access key
    -s S3_SECRET_KEY    optional - s3 secret key
    -a S3_ADDRESS       optional - s3 address format: <host:port>
    -S S3_SECURE        optional - s3 ssl or not
    -u S3_BUCKET_NAME   optional - s3 bucket name
    -G S3_REGION_NAME   optional - s3 region name
    -p REDIS_PASSWORD   optional - redis password
    -r REDIS_ADDRESS    optional - redis address format: <host:port>
    -n KEYNAME          optional - redis and s3 key name
    -m REDIS_DB         optional - redis database number (0 by default)
    -x REDIS_EXPIRE     optional - redis expiration in seconds
    -z STRIKE           optional - strike price
    -c CONTRACT_TYPE    optional - contract type "C" for calls, "P" for puts
    -P GET_PRICING      optional - get pricing data, "1" enabled, "0" disabled
    -N GET_NEWS         optional - get news data, "1" enabled, "0" disabled
    -O GET_OPTIONS      optional - get options data, "1" enabled, "0" disabled
    -U S3_ENABLED       optional - s3 publishing, "1" enabled, "0" disabled
    -R REDIS_ENABLED    optional - redis publishing, "1" enabled, "0" disabled
    -A ANALYSIS_TYPE    optional - run an analysis, supported modes: scn
    -L URLS             optional - screener urls to pull tickers for analysis
    -Z                  disable running without an engine, for local testing and demos
    -d                  debug

Run a FinViz Screener-driven Analysis
=====================================

This is a work in progress, but the screener-driven workflow is:

#.  Convert a FinViz screener into a list of tickers and a ``pandas.DataFrame`` from each ticker's HTML row
#.  Build a unique list of tickers
#.  Pull datasets for each ticker
#.  Run sell-side processing - coming soon
#.  Run buy-side processing - coming soon
#.  Send alerts to Slack - coming soon
Here is how to run an analysis on all of the unique tickers found in two FinViz screener urls:

https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4
and
https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4

::

    fetch -A scn -L 'https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https://finviz.com/screener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'

Publishing from an Existing S3 Key to Redis
===========================================

#.  Upload the integration test key to S3

    ::

        export INT_TESTS=1
        python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload

#.  Confirm the integration test key is in S3

    http://localhost:9000/minio/integration-tests/

#.  Run an analysis with the existing S3 key using `./analysis_engine/scripts/publish_from_s3_to_redis.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/publish_from_s3_to_redis.py>`__

    ::

        publish_from_s3_to_redis.py -t SPY -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1

#.  Confirm the key is now in Redis

    ::

        ./tools/redis-cli.sh
        127.0.0.1:6379> keys *
        keys *
        1) "SPY_demo_daily"
        2) "SPY_demo_minute"
        3) "SPY_demo_company"
        4) "integration-test-v1"
        5) "SPY_demo_stats"
        6) "SPY_demo"
        7) "SPY_demo_quote"
        8) "SPY_demo_peers"
        9) "SPY_demo_dividends"
        10) "SPY_demo_news1"
        11) "SPY_demo_news"
        12) "SPY_demo_options"
        13) "SPY_demo_pricing"
        127.0.0.1:6379>
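The ``keys *`` listing above follows a ``<name>_<dataset>`` naming pattern. When eyeballing long listings, it can help to group the keys by dataset suffix; a sketch in pure string processing (the naming convention is inferred from the listing, not a documented contract):

```python
# Sketch: group a redis `keys *` listing by dataset suffix for one
# key prefix. The <name>_<dataset> convention is inferred from the
# listing above, not a documented contract.

def group_keys(keys, prefix):
    """Map dataset suffixes to full keys for keys matching the prefix."""
    grouped = {}
    for key in keys:
        if key == prefix:
            grouped['root'] = key
        elif key.startswith(prefix + '_'):
            grouped[key[len(prefix) + 1:]] = key
    return grouped


listing = ['SPY_demo_daily', 'SPY_demo_minute', 'integration-test-v1', 'SPY_demo']
by_suffix = group_keys(listing, 'SPY_demo')
```

Missing suffixes (say, no ``minute`` entry) point at a feed that failed to cache.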
Aggregate and Publish Ticker Data from S3 to Redis
==================================================

#.  Run an analysis with the existing S3 keys using `./analysis_engine/scripts/publish_ticker_aggregate_from_s3.py <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/analysis_engine/scripts/publish_ticker_aggregate_from_s3.py>`__

    ::

        publish_ticker_aggregate_from_s3.py -t SPY -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -u pricing -c compileddatasets

#.  Confirm the aggregated ticker data is now in Redis

    ::

        ./tools/redis-cli.sh
        127.0.0.1:6379> keys *latest*
        1) "SPY_latest"
        127.0.0.1:6379>

View Archives in S3 - Minio
===========================

Here is a screenshot of the stock market dataset archives created while running on a `3-node Kubernetes cluster for distributed AI predictions <https://github.com/jay-johnson/deploy-to-kubernetes#deploying-a-distributed-ai-stack-to-kubernetes-on-centos>`__:

.. image:: https://i.imgur.com/wDyPKAp.png

http://localhost:9000/minio/pricing/

Login:

- username: ``trexaccesskey``
- password: ``trex123321``

Use the AWS CLI to list the pricing bucket. Please refer to the official docs for using the ``awscli`` pip with MinIO:

https://docs.minio.io/docs/aws-cli-with-minio.html

#.  Export credentials

    ::

        export AWS_SECRET_ACCESS_KEY=trex123321
        export AWS_ACCESS_KEY_ID=trexaccesskey

#.  List buckets

    ::

        aws --endpoint-url http://localhost:9000 s3 ls
        2018-10-02 22:24:06 company
        2018-10-02 22:24:02 daily
        2018-10-02 22:24:06 dividends
        2018-10-02 22:33:15 integration-tests
        2018-10-02 22:24:03 minute
        2018-10-02 22:24:05 news
        2018-10-02 22:24:04 peers
        2018-10-02 22:24:06 pricing
        2018-10-02 22:24:04 stats
        2018-10-02 22:24:04 quote

#.  List pricing bucket contents

    ::

        aws --endpoint-url http://localhost:9000 s3 ls s3://pricing

#.  Get the latest SPY pricing key
    ::

        aws --endpoint-url http://localhost:9000 s3 ls s3://pricing | grep -i spy_demo
        SPY_demo

View Caches in Redis
====================

::

    ./tools/redis-cli.sh
    127.0.0.1:6379> keys *
    1) "SPY_demo"

Jupyter
=======

You can run the Jupyter notebooks by starting the `notebook-integration.yml stack <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/notebook-integration.yml>`__.

.. warning:: On Mac OS X, Jupyter does not work with the analysis engine at the moment. PRs are welcome, but we have not found a way to share notebooks and access Redis and MinIO given the `known docker compose network_host issues on Mac OS X <https://github.com/docker/for-mac/issues/1031>`__.

For Linux users, the Jupyter container hosts the `Stock Analysis Intro notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro.ipynb>`__ at this URL (the default login password is ``admin``):

http://localhost:8888/notebooks/Stock-Analysis-Intro.ipynb

Jupyter Presentations with RISE
===============================

The Docker container comes with `RISE <https://github.com/damianavila/RISE>`__ installed for running notebook presentations from a browser. Here is the notebook button for starting a web presentation:

.. image:: https://i.imgur.com/IDMW2Oc.png

Distributed Automation with Docker
==================================

.. note::
   Automation requires the integration stack (Redis + MinIO + the engine) to be running with docker-compose.

Dataset Collection
==================

Start automated dataset collection with docker-compose:

::

    ./compose/start.sh -c

Datasets in Redis
=================

After running the dataset collection container, the datasets should be auto-cached in MinIO (http://localhost:9000/minio/pricing/) and Redis:

::

    ./tools/redis-cli.sh
    127.0.0.1:6379> keys *

Publishing to Slack
===================

Please refer to the `Publish Stock Alerts to Slack Jupyter notebook <https://github.com/AlgoTraders/stock-analysis-engine/blob/master/compose/docker/notebooks/Stock-Analysis-Intro-Publishing-to-Slack.ipynb>`__ for the latest usage examples.

Publish FinViz Screener Tickers to Slack
----------------------------------------

Here is sample code for testing the Slack integration.

.. code-block:: python

    import analysis_engine.finviz.fetch_api as fv
    from analysis_engine.send_to_slack import post_df
    # simple NYSE Dow Jones Index financial sector with a P/E above 5 screener url
    url = 'https://finviz.com/screener.ashx?v=111&f=exch_nyse,fa_pe_o5,idx_dji,sec_financial&ft=4'
    res = fv.fetch_tickers_from_screener(url=url)
    df = res['rec']['data']

    # please make sure the SLACK_WEBHOOK env var is set correctly:
    post_df(
        df=df[SLACK_FINVIZ_COLUMNS],
        columns=SLACK_FINVIZ_COLUMNS)

Running on Kubernetes
=====================

Kubernetes Deployments - Engine
-------------------------------

Deploy the engine with:

::

    kubectl apply -f ./k8/engine/deployment.yml

Kubernetes Job - Dataset Collection
-----------------------------------

Start the dataset collection job with:

::

    kubectl apply -f ./k8/datasets/job.yml

Kubernetes Deployments - Jupyter
--------------------------------

Deploy Jupyter to the Kubernetes cluster with:

::

    ./k8/jupyter/run.sh

Kubernetes with a Private Docker Registry
=========================================

You can deploy a private Docker registry for pulling images from outside the Kubernetes cluster with these steps:

#.  Deploy the Docker registry

    ::

        ./compose/start.sh -r

#.  Configure the Kubernetes hosts and any other Docker daemons to trust the insecure registry
    ::

        cat /etc/docker/daemon.json
        {
            "insecure-registries": [
                "<public IP address/FQDN of the host running the registry container>:5000"
            ]
        }

#.  Restart all Docker daemons

    ::

        sudo systemctl restart docker

#.  Log in to the Docker registry from all Kubernetes hosts and any other daemons that need access to the registry

    .. note:: You can change the default registry password by editing ``./compose/start.sh``, which uses ``trex`` and ``123321`` as credentials, or by editing the mounted volume file ``/data/registry/auth/htpasswd``. Here is how to find the registry's default login settings:

        ::

            grep docker compose/start.sh | grep htpass

    ::

        docker login <public IP address/FQDN of the host running the registry container>:5000

#.  Set Kubernetes secrets for all credentials

    Set each field based on your own buckets, Docker registry and Tradier account token:

    ::

        cat /opt/sa/k8/secrets/secrets.yml | grep SETYOUR
        aws_access_key_id: SETYOURENCODEDAWSACCESSKEYID
        aws_secret_access_key: SETYOURENCODEDAWSSECRETACCESSKEY
        .dockerconfigjson: SETYOURDOCKERCREDS
        td_token: SETYOURTDTOKEN

#.  Deploy the Kubernetes secrets

    ::

        kubectl apply -f /opt/sa/k8/secrets/secrets.yml

#.  Confirm the Kubernetes secrets are deployed

    ::

        kubectl get secrets ae.docker.creds
        NAME              TYPE                             DATA   AGE
        ae.docker.creds   kubernetes.io/dockerconfigjson   1      4d1h

    ::

        kubectl get secrets | grep "ae\."
        ae.docker.creds         kubernetes.io/dockerconfigjson        1      4d1h
        ae.k8.aws.s3            Opaque                                3      4d1h
        ae.k8.minio.s3          Opaque                                3      4d1h
        ae.k8.tradier           Opaque                                4      4d1h

#.  Configure Kubernetes deployments to use the external, private Docker registry
    Add lines like these to your Kubernetes deployment YAML files, based on your setup:

    ::

        imagePullSecrets:
        - name: ae.docker.creds
        containers:
        - image: <public IP address/FQDN of the host running the registry container>:5000/my-own-stock-ae:latest
          imagePullPolicy: Always

.. tip:: After spending a good amount of time debugging this: make sure to delete the old pods before applying new pods that pull Docker images from the external registry. Run ``kubectl delete pod <name>`` and then apply or create the new pod to run the latest image.

Testing
=======

To enable debug and trace logging, export ``SHARED_LOG_CFG`` pointing to a debug logging JSON file. To turn on debugging for this library, export it to the file included in the repository:

::

    export SHARED_LOG_CFG=/opt/sa/analysis_engine/log/debug-logging.json

.. note:: There is a known `pandas issue <https://github.com/pandas-dev/pandas/issues/18141>`__ that logs a warning about _timelex, and the warning will continue to show up until pandas fixes it. Please ignore it for now:

   ::

        DeprecationWarning: _timelex is a private class and may break without warning; it will be moved or renamed in a future version.

Run all tests:

::

    py.test --maxfail=1

Run a single test case:

::

    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_publish_pricing_data

Publishing Tests
----------------

S3 Upload
---------

::

    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_s3_upload

Publish from S3 to Redis
------------------------

::

    python -m unittest tests.test_publish_from_s3_to_redis.TestPublishFromS3ToRedis.test_success_publish_from_s3_to_redis

Redis Cache Set
---------------

::

    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_success_redis_set

Prepare a Dataset
-----------------

::

    python -m unittest tests.test_prepare_pricing_dataset.TestPreparePricingDataset.test_prepare_pricing_data_success

Test an Algorithm Saving All Input Datasets to a File
-----------------------------------------------------

::

    python -m unittest tests.test_base_algo.TestBaseAlgo.test_algo_can_save_all_input_datasets_to_file
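A debug logging JSON file like the ``SHARED_LOG_CFG`` target above boils down to a standard ``logging.config.dictConfig`` schema. As a minimal sketch of an equivalent config (the actual JSON in the repository may differ in handlers and formatters):

```python
# Sketch: the shape of a debug logging config applied with dictConfig.
# The repo's debug-logging.json may use different handlers/formatters.
import logging
import logging.config

DEBUG_LOGGING = {
    'version': 1,
    'disable_existing_loggers': False,
    'formatters': {
        'simple': {
            'format': '%(asctime)s - %(name)s - %(levelname)s - %(message)s',
        },
    },
    'handlers': {
        'console': {
            'class': 'logging.StreamHandler',
            'formatter': 'simple',
        },
    },
    'root': {'level': 'DEBUG', 'handlers': ['console']},
}

logging.config.dictConfig(DEBUG_LOGGING)
log = logging.getLogger('analysis_engine.demo')
log.debug('debug logging enabled')
```

Setting the root level to ``DEBUG`` is what makes the per-task trace output show up across all of the engine's loggers.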
.\u002Fcompose\u002Fstart.sh -a\n\n验证容器是否正在运行：\n\n::\n\n    docker ps | grep -E \"stock-analysis|redis|minio\"\n\n停止端到端堆栈：\n\n::\n\n    .\u002Fcompose\u002Fstop.sh\n    .\u002Fcompose\u002Fstop.sh -s\n\n集成单元测试\n=====================\n\n.. 注意:: 请在运行这些测试之前先启动 Redis 和 MinIO。\n\n请启用集成测试：\n\n::\n\n    export INT_TESTS=1\n\nRedis\n-----\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_redis_set\n\nS3 上传\n---------\n\n::\n\n    python -m unittest tests.test_publish_pricing_update.TestPublishPricingData.test_integration_s3_upload\n\n\n从 S3 发布到 Redis\n------------------------\n\n::\n\n    python -m unittest tests.test_publish_from_s3_to_redis.TestPublishFromS3ToRedis.test_integration_publish_from_s3_to_redis\n\nIEX 测试 - 获取所有数据集\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data\n\nIEX 测试 - 每日获取\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_daily\n\nIEX 测试 - 每分钟获取\n-----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_minute\n\nIEX 测试 - 获取统计数据\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_stats\n\nIEX 测试 - 获取同行数据\n----------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_peers\n\nIEX 测试 - 获取新闻\n---------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_news\n\nIEX 测试 - 获取财务数据\n---------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_financials\n\nIEX 测试 - 获取收益数据\n-------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_earnings\n\nIEX 测试 - 获取股息数据\n--------------------------\n\n::\n\n    python -m unittest 
tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_dividends\n\nIEX 测试 - 获取公司信息\n------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_fetch_company\n\nIEX 测试 - 获取财务数据辅助函数\n----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_fetch_data.TestIEXFetchData.test_integration_get_financials_helper\n\nIEX 测试 - 提取每日数据集\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_daily_dataset\n\nIEX 测试 - 提取每分钟数据集\n---------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_minute_dataset\n\nIEX 测试 - 提取报价数据集\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_quote_dataset\n\nIEX 测试 - 提取统计数据集\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_stats_dataset\n\nIEX 测试 - 提取同行数据集\n--------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_peers_dataset\n\nIEX 测试 - 提取新闻数据集\n-------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_news_dataset\n\nIEX 测试 - 提取财务数据集\n-------------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_financials_dataset\n\nIEX 测试 - 提取收益数据集\n-----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_earnings_dataset\n\nIEX 测试 - 提取股息数据集\n------------------------------------\n\n::\n\n    python -m unittest 
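tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_dividends_dataset

这些提取测试读取的是 Redis 中缓存的数据集。按本文档后面"验证数据缓存"部分出现的键样式（如 SPY_2019-01-08_minute），缓存键大致由 代码_日期_数据集 拼接而成。下面是一个假设性的小工具（``build_redis_key`` 并非仓库 API，仅用于说明键样式）：

```python
# 假设性示意: 按文档中出现的缓存键样式拼出各数据集的 Redis 键,
# 实际键名规则以仓库实现为准
def build_redis_key(ticker, date_str, dataset):
    return '{}_{}_{}'.format(ticker, date_str, dataset)

for ds in ('daily', 'minute', 'dividends', 'company'):
    # 依次打印 SPY_2019-01-08_daily 等键名
    print(build_redis_key('SPY', '2019-01-08', ds))
```

股息数据集的提取测试命令（同上）：

::

    python -m unittest 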
tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_dividends_dataset\n\nIEX 测试 - 提取公司数据集\n----------------------------------\n\n::\n\n    python -m unittest tests.test_iex_dataset_extraction.TestIEXDatasetExtraction.test_integration_extract_company_dataset\n\nFinViz 测试 - 从筛选器 URL 获取股票代码\n---------------------------------------------\n\n::\n\n    python -m unittest tests.test_finviz_fetch_api.TestFinVizFetchAPI.test_integration_test_fetch_tickers_from_screener\n\n或者使用代码：\n\n.. code-block:: python\n\n    import analysis_engine.finviz.fetch_api as fv\n    url = 'https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=exch_nyse&ft=4&r=41'\n    res = fv.fetch_tickers_from_screener(url=url)\n    print(res)\n\n算法测试\n=================\n\n算法测试——将输入数据集发布到 Redis\n--------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_dataset_to_redis\n\n算法测试——将输入数据集发布到文件\n-------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_dataset_to_file\n\n算法测试——从文件加载数据集\n-----------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_load_from_file\n\n算法测试——将算法就绪的数据集发布到 S3 并从 S3 加载\n-----------------------------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_s3_and_load\n\n算法测试——将算法就绪的数据集发布到 Redis 并从 Redis 加载\n-----------------------------------------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_integration_algo_publish_input_redis_and_load\n\n算法测试——从 Redis 数据库 0 中提取算法就绪的数据集，并加载到 Redis 数据库 1\n-----------------------------------------------------------------------------------------\n\n在 Redis 数据库之间复制数据集是集成测试的一部分。运行方法如下：\n\n::\n\n    python -m unittest 
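tests.test_base_algo.TestBaseAlgo.test_integration_algo_restore_ready_back_to_redis

该测试覆盖的"还原"流程，本质上是把数据库 0 中算法就绪数据集对应的键复制到数据库 1。下面用纯 Python 字典代替两个 Redis 逻辑数据库做一个最小示意（假设性代码，不连接真实 Redis，``restore_key`` 为演示自拟）：

```python
# 用字典模拟两个 Redis 逻辑数据库之间的键复制（示意）
db0 = {'SPY_demo': b'compressed-dataset-bytes'}  # 模拟数据库 0
db1 = {}                                         # 模拟数据库 1

def restore_key(src, dst, key):
    # 真实实现还需处理序列化与压缩, 此处从略
    if key not in src:
        raise KeyError(key)
    dst[key] = src[key]

restore_key(db0, db1, 'SPY_demo')
print(sorted(db1))  # ['SPY_demo']
```

对应的集成测试命令（同上）：

::

    python -m unittest 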
tests.test_base_algo.TestBaseAlgo.test_integration_algo_restore_ready_back_to_redis\n\n算法测试——测试文档示例\n--------------------------------------\n\n::\n\n    python -m unittest tests.test_base_algo.TestBaseAlgo.test_sample_algo_code_in_docstring\n\n准备数据集\n=================\n\n::\n\n    ticker=SPY\n    sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo\n\n调试\n=========\n\n测试算法\n----------\n\n运行算法最快的方式是指定一个 1 天的时间范围：\n\n::\n\n    sa -t SPY -s $(date +\"%Y-%m-%d\") -n $(date +\"%Y-%m-%d\")\n\n测试任务\n----------\n\n大多数脚本都支持在没有 Celery 工作进程的情况下运行。要以同步模式（不经过工作进程）运行，请先导出以下环境变量：\n\n::\n\n    export CELERY_DISABLED=1\n\n随后即可直接运行各脚本，例如：\n\n::\n\n    ticker=SPY\n    publish_from_s3_to_redis.py -t ${ticker} -u integration-tests -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n integration-test-v1\n    sa -t ${ticker} -f -o ${ticker}_latest_v1 -j prepared -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo\n    fetch -t ${ticker} -g all -e 2018-10-19 -u pricing -k trexaccesskey -s trex123321 -a localhost:9000 -r localhost:6379 -m 0 -n ${ticker}_demo -P 1 -N 1 -O 1 -U 1 -R 1\n    fetch -A scn -L 'https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o6,idx_sp500&ft=4|https:\u002F\u002Ffinviz.com\u002Fscreener.ashx?v=111&f=cap_midunder,exch_nyse,fa_div_o8,idx_sp500&ft=4'\n\n代码检查及其他工具\n-----------------------\n\n#.  代码检查\n\n    ::\n\n        flake8 .\n        pycodestyle .\n\n#.  Sphinx 文档\n\n    ::\n\n        cd docs\n        make html\n\n#.  Docker 管理——拉取最新镜像\n\n    ::\n\n        docker pull jayjohnson\u002Fstock-analysis-jupyter && docker pull jayjohnson\u002Fstock-analysis-engine\n\n#.  
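检查 CELERY_DISABLED 开关是否生效（示意）

上文"测试任务"一节通过导出 ``CELERY_DISABLED=1`` 切换同步模式。下面是一个假设性的小片段（函数 ``celery_disabled`` 为演示自拟，并非仓库实现），展示这类环境变量开关通常如何在 Python 中读取：

```python
# 假设性示意: 读取 CELERY_DISABLED 这类环境变量开关
import os

def celery_disabled():
    # 导出 CELERY_DISABLED=1 时返回 True, 其余情况返回 False
    return os.getenv('CELERY_DISABLED', '0') == '1'

os.environ['CELERY_DISABLED'] = '1'
print(celery_disabled())  # True
```

#.  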
备份 Docker Redis 数据库\n\n    ::\n\n        \u002Fopt\u002Fsa\u002Ftools\u002Fbackup-redis.sh\n\n    查看本地 Redis 备份：\n\n    ::\n\n        ls -hlrt \u002Fopt\u002Fsa\u002Ftests\u002Fdatasets\u002Fredis\u002Fredis-0-backup-*.rdb\n\n#.  将 Kubernetes Redis 集群的数据库导出到本地 Redis 容器\n\n    #.  停止 Redis Docker 容器：\n\n        ::\n\n            .\u002Fcompose\u002Fstop.sh\n\n    #.  归档之前的 Redis 数据库\n\n        ::\n\n            cp \u002Fdata\u002Fredis\u002Fdata\u002Fdump.rdb \u002Fdata\u002Fredis\u002Fdata\u002Farchive.rdb\n\n    #.  保存集群中的 Redis 数据库\n\n        ::\n\n            kubectl exec -it redis-master-0 redis-cli save\n\n    #.  将 Pod 内保存的 Redis 数据库文件导出到默认 Docker Redis 容器的本地文件中\n\n        ::\n\n            kubectl cp redis-master-0:\u002Fbitnami\u002Fredis\u002Fdata\u002Fdump.rdb \u002Fdata\u002Fredis\u002Fdata\u002Fdump.rdb\n\n    #.  重启堆栈\n\n        .. note:: Redis 将所有数据重新加载到内存需要一点时间，堆栈启动后请等待数秒再查询。\n\n        ::\n\n            .\u002Fcompose\u002Fstart.sh\n\n将分叉的功能分支部署到正在运行的容器中\n================================================\n\n在开发影响多个容器的功能时，无需重新下载或手动构建 Docker 镜像，即可部署您自己的功能分支。当容器正在运行时，您可以将自己的分叉分支作为新镜像部署（这些镜像会自动保存为新的 Docker 容器镜像）。\n\n将公共或私有分叉部署到正在运行的容器中\n-------------------------------------------------------\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh \u003Cgit 分叉 https uri> \u003C可选——分支名称（默认为 master）> \u003C可选——分叉仓库名称>\n\n示例：\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh https:\u002F\u002Fgithub.com\u002Fjay-johnson\u002Fstock-analysis-engine.git timeseries-charts jay\n\n将容器恢复到主分支\n-----------------------------------------\n\n通过以下命令将容器构建恢复到来自 https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine 的 ``master`` 分支：\n\n::\n\n    .\u002Ftools\u002Fupdate-stack.sh https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine.git master upstream\n\n部署分叉别名\n-----------------\n\n以下是一个 bashrc 别名，用于快速从分叉的功能分支构建容器：\n\n::\n\n    alias bd='pushd \u002Fopt\u002Fsa >> \u002Fdev\u002Fnull && source \u002Fopt\u002Fvenv\u002Fbin\u002Factivate && 
\u002Fopt\u002Fsa\u002Ftools\u002Fupdate-stack.sh https:\u002F\u002Fgithub.com\u002Fjay-johnson\u002Fstock-analysis-engine.git timeseries-charts jay && popd >> \u002Fdev\u002Fnull'\n\n调试 IEX 数据获取\n-----------------------\n\n::\n\n    ticker=\"SPY\"\n    use_date=$(date +\"%Y-%m-%d\")\n    source \u002Fopt\u002Fvenv\u002Fbin\u002Factivate\n    exp_date=$(\u002Fopt\u002Fsa\u002Fanalysis_engine\u002Fscripts\u002Fprint_next_expiration_date.py)\n    fetch -t ${ticker} -g iex -n ${ticker}_${use_date} -e ${exp_date} -Z\n\nTradier 数据获取失败\n----------------------------\n\n如果您在尝试从 Tradier 获取定价数据时遇到以下错误，请确保在您的 ``compose\u002Fenvs\u002F*.env`` Docker Compose 文件中导出了有效的 ``TD_TOKEN``：\n\n::\n\n    2019-01-09 00:16:47,148 - analysis_engine.td.fetch_api - INFO - 获取put失败，响应=\u003CResponse [401]>，代码=401，文本=无效的访问令牌\n    2019-01-09 00:16:47,151 - analysis_engine.td.get_data - CRITICAL - 股票代码=TSLA-tdputs - 股票代码=TSLA 字段=10001 fetch_data 失败，异常='date'\n    2019-01-09 00:16:47,151 - analysis_engine.work_tasks.get_new_pricing_data - CRITICAL - 股票代码=TSLA TD获取失败，股票代码=TSLA 字段=tdputs 状态=ERR 错误=股票代码=TSLA-tdputs - 股票代码=TSLA 字段=10001 fetch_data 失败，异常='date'\n\n许可证\n=======\n\nApache 2.0 - 请参阅 LICENSE_ 以获取更多详细信息\n\n.. 
_License: https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fblob\u002Fmaster\u002FLICENSE\n\n常见问题解答\n============\n\n我可以用我的算法进行实盘交易吗？\n------------------------------------\n\n目前还不能。如果您想了解如何进行实盘交易，或者您已经有心仪的交易平台，请联系我们寻求帮助。\n\n我可以发布算法交易通知吗？\n--------------------------------------------\n\n目前，算法仅支持在发现买入或卖出交易机会时，将消息发布到私有的 Slack 频道中，供团队成员共享。如果您有自定义的聊天客户端应用或服务，并认为应该被支持，请随时与我们联系。\n\n服务条款\n================\n\n数据归属\n================\n\n本仓库目前使用 `Tradier \u003Chttps:\u002F\u002Ftradier.com\u002F>`__ 和 `IEX \u003Chttps:\u002F\u002Fiextrading.com\u002Fdeveloper\u002Fdocs\u002F>`__ 提供的行情数据。使用这些数据源需要遵守其服务条款中的相关协议。\n\nIEX Cloud\n=========\n\n- IEX 使用条款链接：`IEX's Terms of Use \u003Chttps:\u002F\u002Fiextrading.com\u002Fapi-exhibit-a>`__\n- 本仓库使用了 `IEX 实时行情 \u003Chttps:\u002F\u002Fiextrading.com\u002Fdeveloper>`__\n- IEX Cloud 是一个数据源，其额外的数据归属说明可在 https:\u002F\u002Fiextrading.com\u002Fdeveloper\u002Fdocs\u002F#attribution 上找到。\n\n添加 Celery 任务\n===================\n\n如果您想添加一个新的 Celery 任务，请将文件路径添加到以下位置的 ``WORKER_TASKS`` 中：\n\n- compose\u002Fenvs\u002Flocal.env\n- compose\u002Fenvs\u002F.env\n- analysis_engine\u002Fwork_tasks\u002Fconsts.py","# Stock Analysis Engine 快速上手指南\n\nStock Analysis Engine 是一个用于构建和调整投资策略的开源引擎，支持利用人工智能（深度神经网络）进行回测。它提供分布式架构，可从 IEX Cloud、Tradier 等数据源自动获取公开交易公司的实时定价、期权、新闻及财务数据。\n\n## 环境准备\n\n在开始之前，请确保您的系统满足以下要求：\n\n*   **操作系统**: Linux 或 macOS (Windows 用户建议使用 WSL2)。\n    *   *注意*: macOS 用户需注意 Docker Compose 的 `network_mode: \"host\"` 已知兼容性问题，可能导致服务连接困难。\n*   **核心依赖**:\n    *   [Docker](https:\u002F\u002Fwww.docker.com\u002F)\n    *   [Docker Compose](https:\u002F\u002Fdocs.docker.com\u002Fcompose\u002F)\n    *   [Git](https:\u002F\u002Fgit-scm.com\u002F)\n*   **数据源账户** (可选但推荐):\n    *   **IEX Cloud**: 需注册账户并获取 Token (按需计费)。\n    *   **Tradier**: 需注册开发者账户并获取 Token。\n    *   *注*: 默认已禁用 Yahoo Finance 数据源。\n\n## 安装步骤\n\n### 1. 
克隆项目代码\n将仓库克隆到本地目录（示例使用 `\u002Fopt\u002Fsa`）：\n\n```bash\ngit clone https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine.git \u002Fopt\u002Fsa\ncd \u002Fopt\u002Fsa\n```\n\n### 2. 启动基础服务 (Redis & MinIO)\n运行以下命令拉取 Redis 和 MinIO 的 Docker 镜像并创建挂载点：\n\n```bash\n.\u002Fcompose\u002Fstart.sh -a\n```\n\n### 3. 配置数据源凭证\n如果您计划从 IEX Cloud 或 Tradier 获取数据，请设置相应的环境变量：\n\n```bash\n# 替换为您的实际 Token\nexport IEX_TOKEN=YOUR_IEX_TOKEN\nexport TD_TOKEN=YOUR_TRADIER_TOKEN\n```\n\n### 4. 获取最新定价数据\n使用 `fetch` 命令获取特定股票代码（例如 SPY）的数据：\n\n```bash\n# 从所有配置的数据源获取\nfetch -t SPY\n\n# 仅从 IEX Cloud 获取\nfetch -t SPY -g iex\n\n# 仅从 Tradier 获取\nfetch -t SPY -g td\n\n# 获取过去 30 天的分钟级历史数据 (仅限 IEX Cloud)\nbackfill-minute-data.sh SPY\n```\n\n### 5. 启动完整栈 (可选，用于实盘分析或完整回测)\n如果需要运行 Worker、回测器、Jupyter Notebook 等完整组件：\n\n```bash\n.\u002Fcompose\u002Fstart.sh\n```\n*该命令会自动拉取约 3.0 GB 的 `stock-analysis-engine` 镜像并启动所有相关服务。*\n\n## 基本使用\n\n### 验证数据缓存\n数据获取后会被压缩存储在 Redis 中，您可以使用以下命令验证：\n\n```bash\nredis-cli keys \"SPY_*\"\nredis-cli get \"\u003Ckey like SPY_2019-01-08_minute>\"\n```\n\n### 运行简单回测 (Python API)\n您可以使用 `AlgoRunner` API 加载最新数据并运行简单的策略回测。\n\n创建一个 Python 脚本（例如 `test_backtest.py`）：\n\n```python\nimport analysis_engine.algo_runner as algo_runner\nimport analysis_engine.plot_trading_history as plot\n\n# 初始化运行器，目标标的为 SPY\nrunner = algo_runner.AlgoRunner('SPY')\n\n# 获取最新的 200 分钟数据\ndf = runner.latest()\n\n# 打印最后 5 行的时间和收盘价\nprint(df[['minute', 'close']].tail(5))\n\n# 绘制交易历史图表\nplot.plot_trading_history(\n    title=(\n        f'SPY - ${df[\"close\"].iloc[-1]} at: '\n        f'{df[\"minute\"].iloc[-1]}'),\n    df=df)\n\n# 如需启动完整回测，取消下面注释\n# runner.start()\n```\n\n### 提取特定数据集\n您也可以直接从 Redis 缓存中提取特定类型的数据（如分钟线、看涨期权、看跌期权）：\n\n```python\nimport analysis_engine.extract as ae_extract\n\n# 提取 SPY 的分钟线及期权数据\nprint(ae_extract.extract(\n    'SPY',\n    datasets=['minute', 'tdcalls', 'tdputs']))\n\n# 提取特定日期的历史数据 (格式：YYYY-MM-DD)\nprint(ae_extract.extract(\n    'AAPL',\n    datasets=['minute', 'daily', 
'financials', 'earnings', 'dividends'],\n    date='2019-02-15'))\n```\n\n### 进阶使用\n*   **Jupyter Notebook**: 项目内置了多个 Notebook 示例，可用于比较深度神经网络模型或自定义交易算法。\n*   **命令行回测**: 使用 `run_backtest_and_plot_history.py` 脚本配合配置文件运行复杂回测。\n*   **Kubernetes 部署**: 生产环境用户可参考 Helm Chart 指南在 K8s 集群中部署。","某量化团队正试图训练一个深度神经网络（DNN）来预测股票分钟级走势，需要海量高质量的历史回测数据作为燃料。\n\n### 没有 stock-analysis-engine 时\n- **数据源分散且手动整合难**：开发者需分别编写脚本对接 IEX、Tradier 等多个 API，处理格式不统一的行情、期权及新闻数据，耗时极易出错。\n- **回测规模受限**：本地单机难以并发运行数千种交易算法的回测，无法生成超过 1.5 亿行的历史交易记录，导致 AI 模型因样本不足而过拟合。\n- **基础设施部署复杂**：缺乏现成的容器化方案，手动配置 Redis 缓存、Minio 存储及 Kubernetes 集群环境门槛高，维护成本巨大。\n- **数据流转断层**：清洗后的数据无法自动发布到 S3 对象存储，阻碍了后续 AI 训练流水线的自动化衔接。\n\n### 使用 stock-analysis-engine 后\n- **多源数据自动聚合**：通过简单的命令行或 Docker 配置，即可自动从 IEX Cloud 和 Tradier 拉取包含分钟级价格、财报及筛选调度的统一数据集。\n- **大规模分布式回测**：依托 Kubernetes 和 docker-compose 架构，轻松并发回测 5000+ 种算法，快速生成亿级行情的标准化训练集。\n- **一键式环境交付**：利用内置的 Helm 指南和启动脚本，几分钟内即可在裸金属服务器或云端部署包含 Redis 和 Minio 的完整分析栈。\n- **训练数据无缝就绪**：系统自动将回测性能数据和数据集发布至 S3，直接为深度神经网络提供“开箱即用”的高质量输入。\n\nstock-analysis-engine 将原本数周的数据工程工作压缩至小时级，让量化团队能专注于核心 AI 策略的迭代而非底层数据基建。","https:\u002F\u002Foss.gittoolsai.com\u002Fimages\u002FAlgoTraders_stock-analysis-engine_8cfa0182.png","AlgoTraders","https:\u002F\u002Foss.gittoolsai.com\u002Favatars\u002FAlgoTraders_df542574.png",null,"https:\u002F\u002Fgithub.com\u002FAlgoTraders",[80,84,88,92,96,100,103],{"name":81,"color":82,"percentage":83},"Jupyter Notebook","#DA5B0B",78.7,{"name":85,"color":86,"percentage":87},"Python","#3572A5",19.4,{"name":89,"color":90,"percentage":91},"Shell","#89e051",1.7,{"name":93,"color":94,"percentage":95},"Smarty","#f0c040",0.1,{"name":97,"color":98,"percentage":99},"Dockerfile","#384d54",0,{"name":101,"color":102,"percentage":99},"Vim Script","#199f4b",{"name":104,"color":105,"percentage":99},"Makefile","#427819",1207,265,"2026-04-03T02:00:28",4,"Linux, macOS","未说明","未说明 (Docker 镜像约 3.0 GB)",{"notes":114,"python":111,"dependencies":115},"该工具主要基于 Docker 和 Docker Compose 部署，核心依赖 Redis 进行数据缓存和 Minio 
进行对象存储。虽然支持深度学习（深度神经网络），但 README 未明确列出具体的 GPU 型号或 CUDA 版本要求。macOS 用户需注意 Docker Compose 的网络模式已知问题。运行完整栈需拉取约 3.0 GB 的 Docker 镜像。数据源需要 IEX Cloud 或 Tradier 的 API 密钥。",[116,117,118,119,120,121],"Docker","Docker Compose","Redis","Minio","Kubernetes (可选)","Helm (可选)",[13],[124,125,126,127,128,129,130,131,132,133,134,135,136,137,138,139,140,141,142,143],"algorithmic-trading","stocks","options","tensorflow","keras","kubernetes","iex","tradier","backtesting","docker","redis","minio","s3","deep-neural-networks","deep-learning","jupyter","deep-learning-tutorial","iexcloud","helm","helm-charts","2026-03-27T02:49:30.150509","2026-04-06T11:56:40.935625",[147,152,157,162,167],{"id":148,"question_zh":149,"answer_zh":150,"source_url":151},18776,"在 Ubuntu 上执行 'pip install -e .' 时遇到 'Permission denied' 错误怎么办？","这是因为当前用户没有权限创建 egg-info 目录。解决方法是使用 sudo 提升权限运行安装命令：\nsudo pip3 install -e .","https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fissues\u002F374",{"id":153,"question_zh":154,"answer_zh":155,"source_url":156},18777,"如何在本地启动 Docker 容器来评估 Worker 或运行 Jupyter？","项目已提供 v1 版本的 Docker 镜像，可以使用 docker-compose 分别启动:\n1. 启动 Worker 进行评估：\ndocker-compose -f compose\u002Fworkers.yml up\n2. 启动 Jupyter Notebook：\ndocker-compose -f compose\u002Fjupyter.yml up","https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fissues\u002F1",{"id":158,"question_zh":159,"answer_zh":160,"source_url":161},18778,"如何解决 Rook-Ceph OSD Pod 陷入 CrashLoopBackOff 循环的问题？","如果无法通过常规手段修复且日志显示绑定地址失败（Cannot assign requested address），可以尝试删除故障的 OSD Pod 并重启相关的引擎 Pod。如果问题持续存在且难以排查（可能与内核版本、K8s 版本或 Redis 版本有关），建议考虑迁移到更稳定的存储方案，例如 OpenEBS。\n相关仓库：https:\u002F\u002Fgithub.com\u002Fopenebs\u002Fopenebs","https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fissues\u002F371",{"id":163,"question_zh":164,"answer_zh":165,"source_url":166},18779,"使用 Helm 部署时，如何防止删除 ae-minio 导致数据丢失？","默认情况下，执行 'helm delete --purge ae-minio' 会同时删除 PVC 和 PV 导致数据丢失。解决方案有两种：\n1. 
（推荐）将 MinIO 部署在 Kubernetes 集群外部以避免持久化问题。\n2. 如果必须在 K8s 内部署，应使用命名的 PVC\u002FPV，并确保 Kubernetes 存储类配置为 'Retain'（保留）策略，这样即使 Helm release 被删除，底层数据卷也会被保留。","https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fissues\u002F356",{"id":168,"question_zh":169,"answer_zh":170,"source_url":171},18780,"在 macOS 上使用 Python 3.7 运行时遇到 Celery Redis 后端语法错误怎么办？","由于 Python 3.7 中 'async' 成为保留字导致兼容性问题，建议在 macOS 上改用 Python 3.6 运行环境。该项目已在 macOS High Sierra (10.13.6) 上验证 Python 3.6 可正常工作。请参考项目 README 中关于 macOS 安装的详细文档。","https:\u002F\u002Fgithub.com\u002FAlgoTraders\u002Fstock-analysis-engine\u002Fissues\u002F2",[]]