# AI/ML Network Module Documentation

## Overview

This document describes the architecture, design, and usage of the neural network and AI/ML infrastructure implemented in the Warrior_EA project. The core logic is implemented in `AI/Network.mqh` (MQL5) and accelerated with OpenCL kernels in `AI/Network.cl`.
## 1. Network.mqh (MQL5 Neural Network Framework)

### Purpose

- Implements a modular, extensible neural network framework for trading signal generation, optimization, and adaptive money management.
- Supports multiple neuron types: standard, convolutional, pooling, and LSTM (recurrent) neurons.
- Provides both CPU and GPU (OpenCL) execution paths.
- Designed for integration with the EA's signal, filter, and scoring modules.
### Key Components

- `CConnection` / `CArrayCon`: Weighted connections between neurons, with support for momentum and Adam optimizers.
- `CNeuronBase` / `CNeuron` / `CNeuronPool` / `CNeuronConv` / `CNeuronLSTM`: Hierarchy of neuron types implementing feedforward, convolutional, pooling, and LSTM logic.
- `CLayer` / `CArrayLayer`: Layers of neurons, supporting flexible network topologies.
- `CNet`: The main neural network class, orchestrating layer construction, forward/backward propagation, and persistence.
- `CNeuronBaseOCL`: GPU-accelerated neuron implementation using OpenCL buffers and kernels.
- Enumerations: Activation functions (`NONE`, `TANH`, `SIGMOID`), optimizers (`SGD`, `ADAM`), and buffer types.
### Design Highlights

- OpenCL Integration: Uses `#resource "Network.cl"` for GPU acceleration; the CPU or GPU path is selected automatically based on layer type.
- Extensible Neuron Types: New neuron types (e.g., attention, transformer) can be added by extending the base classes.
- Persistence: Supports saving/loading network weights and structure to binary files for training and deployment.
- Dynamic Feature Configuration: Designed to allow dynamic selection of input features (price, volume, indicators) for ML pipelines.
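As a concrete illustration of dynamic feature configuration, an input vector can be assembled from several per-bar features before being handed to the network. This is a minimal sketch only; the three-features-per-bar layout and the `inputs` array name are assumptions for illustration, not code from `Network.mqh`:

```mql5
// Hypothetical sketch: building a dynamic input-feature vector in MQL5.
// The feature layout is illustrative, not Network.mqh's actual format.
double inputs[];
int bars = 10;                                // look-back window (assumed)
ArrayResize(inputs, bars*3);
for(int i=0; i<bars; i++)
  {
   inputs[i*3+0] = iClose(_Symbol, PERIOD_CURRENT, i);            // price
   inputs[i*3+1] = (double)iVolume(_Symbol, PERIOD_CURRENT, i);   // volume
   inputs[i*3+2] = iClose(_Symbol, PERIOD_CURRENT, i)
                 - iOpen(_Symbol, PERIOD_CURRENT, i);             // candle body
  }
```

Because the feature set is assembled at runtime, the same network topology can be retargeted to different feature combinations without code changes elsewhere.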
### Usage Patterns

- Construct the network via `CNet` with a description array of `CLayerDescription` objects.
- Call `feedForward()` with input features, then `backProp()` with target outputs for training.
- Use `Save()` / `Load()` for persistence.
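The usage pattern above can be sketched end to end. Class and method names follow `Network.mqh`, but the constructor arguments, field names on `CLayerDescription`, and the `Save()` signature are assumptions that should be checked against the source:

```mql5
// Hedged sketch of the CNet usage pattern; exact signatures may differ.
CArrayObj *topology = new CArrayObj();

CLayerDescription *input = new CLayerDescription();
input.count = 30;                  // input neurons (field name assumed)
topology.Add(input);

CLayerDescription *output = new CLayerDescription();
output.count      = 1;
output.activation = TANH;          // activation enum from Network.mqh
topology.Add(output);

CNet *net = new CNet(topology);    // build the network from the description

CArrayDouble *features = new CArrayDouble();
CArrayDouble *targets  = new CArrayDouble();
// ... fill features from market data, targets from training labels ...

net.feedForward(features);         // forward pass
net.backProp(targets);             // backward pass / weight update
net.Save("Warrior_EA.net", true);  // persistence; argument list is assumed
```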
### Known Issues & Recommendations

- Parameter Redefinition: Some parameters (e.g., `eta`, `momentum`) are defined both as macros and as variables; refactor for clarity.
- Monolithic Structure: Consider splitting the file by neuron type or functionality for maintainability.
- Unit Testing: No unit tests are present; add tests for each neuron/layer type and for the persistence logic.
- Documentation: This file now serves as the canonical reference for the AI/ML subsystem.
## 2. Network.cl (OpenCL Kernels)

### Purpose

- Provides GPU-accelerated kernels for neural network operations: feedforward, output/hidden gradient calculation, and weight updates (momentum/Adam).
### Key Kernels

- `FeedForward`: Computes neuron activations for a layer.
- `CaclOutputGradient`: Calculates output-layer gradients.
- `CaclHiddenGradient`: Calculates hidden-layer gradients.
- `UpdateWeightsMomentum`: SGD with momentum.
- `UpdateWeightsAdam`: Adam optimizer.
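The update rules behind the last two kernels can be written out on the CPU side for reference. This is a generic restatement of SGD-with-momentum and Adam in a minimization convention (`grad` = dLoss/dw), not the kernel source; parameter names and defaults are illustrative:

```mql5
// Generic SGD-with-momentum step: delta_t = mu*delta_{t-1} - eta*grad
void UpdateMomentumStep(double &w, double &prev_delta, double grad,
                        double eta, double mu)
  {
   double delta = mu*prev_delta - eta*grad;
   w          += delta;
   prev_delta  = delta;
  }

// Generic Adam step with bias-corrected moment estimates
void UpdateAdamStep(double &w, double &m, double &v, double grad, int t,
                    double lr, double b1=0.9, double b2=0.999, double eps=1e-8)
  {
   m = b1*m + (1.0-b1)*grad;              // first-moment estimate
   v = b2*v + (1.0-b2)*grad*grad;         // second-moment estimate
   double m_hat = m/(1.0-MathPow(b1, t)); // bias correction
   double v_hat = v/(1.0-MathPow(b2, t));
   w -= lr*m_hat/(MathSqrt(v_hat)+eps);
  }
```

The GPU kernels apply the same arithmetic, vectorized over `double4` lanes so four weights are updated per work-item.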
### Design Notes

- Uses `double4` vectorization for performance.
- Handles activation functions (tanh, sigmoid) and edge cases for numerical stability.
- Designed to be called from MQL5 via the OpenCL API wrappers in `Network.mqh`.
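The call path from MQL5 into these kernels uses the terminal's built-in OpenCL functions (`CLContextCreate`, `CLProgramCreate`, `CLKernelCreate`, `CLExecute`, and friends). The real wrapper code lives in `Network.mqh`; this sketch only shows the general shape, with buffer sizes, argument indices, and the kernel's parameter list assumed:

```mql5
#resource "Network.cl" as string cl_source    // embed the kernel source

// Hedged sketch: executing a FeedForward-style kernel via MQL5's OpenCL API.
bool RunFeedForward(double &inputs[], double &outputs[])
  {
   int n   = ArraySize(outputs);
   int ctx = CLContextCreate(CL_USE_ANY);     // any available OpenCL device
   int prg = CLProgramCreate(ctx, cl_source); // compile Network.cl
   int krn = CLKernelCreate(prg, "FeedForward");
   int in_buf  = CLBufferCreate(ctx, ArraySize(inputs)*sizeof(double),
                                CL_MEM_READ_WRITE);
   int out_buf = CLBufferCreate(ctx, n*sizeof(double), CL_MEM_READ_WRITE);
   CLBufferWrite(in_buf, inputs);             // host -> device
   CLSetKernelArgMem(krn, 0, in_buf);         // arg indices are assumed
   CLSetKernelArgMem(krn, 1, out_buf);
   uint offs[1] = {0};
   uint work[1] = {(uint)n};                  // one work-item per neuron
   bool ok = CLExecute(krn, 1, offs, work);
   CLBufferRead(out_buf, outputs);            // device -> host
   CLBufferFree(out_buf); CLBufferFree(in_buf);
   CLKernelFree(krn); CLProgramFree(prg); CLContextFree(ctx);
   return ok;
  }
```

In production code the context, program, and buffers would be created once and reused across ticks rather than rebuilt per call, which is presumably why `Network.mqh` wraps them in classes.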
## 3. Integration Points
- Signals: Neural networks are used in AI-driven signal modules (e.g., PAI, CONV, LSTM signals).
- Database: Supports saving/loading trained weights for persistent learning.
- Configurable Features: Designed to allow dynamic selection of input features (price, volume, indicators).
## 4. Next Steps
- Refactor monolithic code into smaller, testable modules.
- Add unit and integration tests for all neural network components.
- Expand documentation for each neuron/layer type and kernel.
- Continue documenting all files in the workspace as per the project plan.
## Workspace Documentation Progress

### Documented Files

- `AI/Network.mqh`: Complete
- `AI/Network.cl`: Complete
### Next Steps
- All files in AI/ are now documented.
- The following directories and files have been enumerated for documentation:
- Database/
- Enumerations/
- Expert/
- Money/
- Signals/
- Structures/
- System/
- Trailing/
- Variables/
- Warrior_EA.mq5
- Warrior_EA.mqproj
- README.md
Documentation will proceed systematically through each folder and file, and this record will be updated as each file is documented.