# AI/ML Network Module Documentation

## Overview

This document describes the architecture, design, and usage of the neural network and AI/ML infrastructure implemented in the Warrior_EA project. The core logic is implemented in `AI/Network.mqh` (MQL5) and accelerated with OpenCL kernels in `AI/Network.cl`.

---

## 1. Network.mqh (MQL5 Neural Network Framework)

### Purpose

- Implements a modular, extensible neural network framework for trading signal generation, optimization, and adaptive money management.
- Supports multiple neuron types: standard, convolutional, pooling, and LSTM (recurrent).
- Provides both CPU and GPU (OpenCL) execution paths.
- Designed for integration with the EA's signal, filter, and scoring modules.

### Key Components

- **CConnection / CArrayCon**: Weighted connections between neurons, with support for momentum and Adam optimizers.
- **CNeuronBase / CNeuron / CNeuronPool / CNeuronConv / CNeuronLSTM**: Hierarchy of neuron types covering feedforward, convolutional, pooling, and LSTM logic.
- **CLayer / CArrayLayer**: Layers of neurons, supporting flexible network topologies.
- **CNet**: The main neural network class, orchestrating layer construction, forward/backward propagation, and persistence.
- **CNeuronBaseOCL**: GPU-accelerated neuron implementation using OpenCL buffers and kernels.
- **Enumerations**: Activation functions (NONE, TANH, SIGMOID), optimizers (SGD, ADAM), and buffer types.
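
The enumerations are roughly of this shape. The identifier names below are illustrative only; the actual names and values live in `AI/Network.mqh`:

```mql5
// Illustrative sketch only; the exact identifiers and ordering are
// defined in AI/Network.mqh and may differ.
enum ENUM_ACTIVATION_FUNCTION
  {
   ACT_NONE,      // pass-through, no activation
   ACT_TANH,      // hyperbolic tangent
   ACT_SIGMOID    // logistic sigmoid
  };

enum ENUM_OPTIMIZATION_METHOD
  {
   OPT_SGD,       // stochastic gradient descent (with momentum)
   OPT_ADAM       // adaptive moment estimation
  };
```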

### Design Highlights

- **OpenCL Integration**: Uses `#resource "Network.cl"` to embed the kernel source for GPU acceleration; the CPU or GPU path is selected automatically based on layer type.
- **Extensible Neuron Types**: New neuron types (e.g., attention, transformer) can be added by extending the base classes.
- **Persistence**: Network weights and structure can be saved to and loaded from binary files for training and deployment.
- **Dynamic Feature Configuration**: Designed to allow dynamic selection of input features (price, volume, indicators) for ML pipelines.
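
The `#resource` embedding pattern can be sketched as follows. The function name `InitGPU` and the kernel count are assumptions for illustration; the standard-library `COpenCL` wrapper calls (`Initialize`, `SetKernelsCount`, `KernelCreate`) are real, but this project's own wrapper usage in `Network.mqh` may differ:

```mql5
#resource "Network.cl" as string cl_program   // embed kernel source in the EA

#include <OpenCL\OpenCL.mqh>

COpenCL opencl;

// Compile the embedded OpenCL program; callers fall back to the
// CPU execution path when no suitable device or build succeeds.
bool InitGPU(void)
  {
   if(!opencl.Initialize(cl_program, true))
      return(false);                          // no GPU or build error
   return(opencl.SetKernelsCount(5) &&
          opencl.KernelCreate(0, "FeedForward"));
  }
```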

### Usage Patterns

- Construct the network via `CNet`, passing an array of `CLayerDescription` objects that describe the topology.
- Call `feedForward()` with input features, then `backProp()` with target outputs for training.
- Use `Save()`/`Load()` for persistence.
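
Putting those calls together, a train-and-save cycle looks roughly like this. This is a simplified sketch: the `CLayerDescription` field names, enum values, and the `Save()` argument list are illustrative; consult `AI/Network.mqh` for the real signatures:

```mql5
#include <Arrays\ArrayObj.mqh>
#include <Arrays\ArrayDouble.mqh>
#include "Network.mqh"

// Describe the topology: one description object per layer.
CArrayObj *topology = new CArrayObj();

CLayerDescription *input = new CLayerDescription();
input.count = 40;                        // input neurons, one per feature (illustrative)
topology.Add(input);

CLayerDescription *hidden = new CLayerDescription();
hidden.count        = 20;
hidden.activation   = TANH;              // activation enum from Network.mqh
hidden.optimization = ADAM;              // optimizer enum from Network.mqh
topology.Add(hidden);

CNet *net = new CNet(topology);          // builds the layers from the descriptions

CArrayDouble *features = new CArrayDouble();   // filled from price/indicator data
CArrayDouble *targets  = new CArrayDouble();   // desired outputs for training

net.feedForward(features);               // forward pass
net.backProp(targets);                   // backward pass and weight update

net.Save("Warrior_net.bin", true);       // persist weights for a later Load()
```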

### Known Issues & Recommendations

- **Parameter Redefinition**: Some parameters (e.g., `eta`, `momentum`) are defined both as macros and as variables; refactor for clarity.
- **Monolithic Structure**: Consider splitting the file by neuron type or functionality for maintainability.
- **Unit Testing**: No unit tests are present; add tests for each neuron/layer type and for the persistence logic.
- **Documentation**: This file now serves as the canonical reference for the AI/ML subsystem.

---

## 2. Network.cl (OpenCL Kernels)

### Purpose

- Provides GPU-accelerated kernels for neural network operations: feedforward, output/hidden gradient calculation, and weight updates (Momentum/Adam).

### Key Kernels

- `FeedForward`: Computes neuron activations for a layer.
- `CaclOutputGradient`: Calculates output layer gradients.
- `CaclHiddenGradient`: Calculates hidden layer gradients.
- `UpdateWeightsMomentum`: SGD with momentum.
- `UpdateWeightsAdam`: Adam optimizer.
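
To illustrate the kernel style, here is a hedged scalar sketch in the spirit of `FeedForward`. This is not the actual `Network.cl` source: the real kernel vectorizes the inner loop with `double4` and uses the project's own buffer layout and signature, all of which are assumed here:

```c
// Scalar sketch of a feed-forward kernel; one work-item per neuron.
// Buffer layout (weights row per neuron, bias in the last slot) and
// the activation encoding are assumptions for illustration.
__kernel void FeedForwardSketch(__global const double *matrix_w,  // weights + bias per neuron
                                __global const double *matrix_i,  // previous-layer outputs
                                __global double *matrix_o,        // this layer's outputs
                                const int inputs,                 // neurons in previous layer
                                const int activation)             // 0=none, 1=tanh, 2=sigmoid
  {
   const int n = get_global_id(0);
   const int shift = n * (inputs + 1);      // row offset into the weight matrix
   double sum = matrix_w[shift + inputs];   // bias term
   for(int i = 0; i < inputs; i++)
      sum += matrix_w[shift + i] * matrix_i[i];
   if(activation == 1)
      sum = tanh(sum);
   else if(activation == 2)
      sum = 1.0 / (1.0 + exp(-sum));        // logistic sigmoid
   matrix_o[n] = sum;
  }
```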

### Design Notes

- Uses `double4` vectorization for performance.
- Handles activation functions (tanh, sigmoid) and guards numerical edge cases for stability.
- Designed to be called from MQL5 via the OpenCL API wrappers in `Network.mqh`.

---

## 3. Integration Points

- **Signals**: Neural networks drive the AI-based signal modules (e.g., PAI, CONV, LSTM signals).
- **Database**: Trained weights can be saved and loaded for persistent learning.
- **Configurable Features**: Input features (price, volume, indicators) can be selected dynamically.

---

## 4. Next Steps

- Refactor monolithic code into smaller, testable modules.
- Add unit and integration tests for all neural network components.
- Expand documentation for each neuron/layer type and kernel.
- Continue documenting all files in the workspace as per the project plan.

---

# Workspace Documentation Progress

## Documented Files

- AI/Network.mqh: Complete
- AI/Network.cl: Complete

## Next Steps

- All files in AI/ are now documented.
- The following directories and files have been enumerated for documentation:
  - Database/
  - Enumerations/
  - Expert/
  - Money/
  - Signals/
  - Structures/
  - System/
  - Trailing/
  - Variables/
  - Warrior_EA.mq5
  - Warrior_EA.mqproj
  - README.md

Documentation will proceed systematically through each folder and file, updating this record as progress continues.

---

*This section will be updated as each file is documented.*