Overview
- Hidden layer
- Localized response to input
- Number of hidden units determined by clustering results
- Output layer
- Linear combination of the basis functions computed by the hidden layer
- Learning is faster than backpropagation (BP)
Application
- Classification
- Function approximation
Gaussian Basis Function
$$h_j = e^{-\frac{d_j^2}{2\sigma_j^2}}$$
where $$d_j$$ is the distance between the input and the center of hidden unit $$j$$, and $$\sigma_j$$ is that unit's width.
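A minimal NumPy sketch of this basis function, assuming $$d_j$$ is the Euclidean distance between the input and the unit's center; the names `gaussian_rbf`, `x`, `center`, and `sigma` are illustrative:

```python
import numpy as np

def gaussian_rbf(x, center, sigma):
    """Gaussian basis activation of one hidden unit (illustrative sketch)."""
    d_sq = np.sum((x - center) ** 2)           # squared distance d_j^2
    return np.exp(-d_sq / (2.0 * sigma ** 2))  # h_j = exp(-d_j^2 / (2 sigma_j^2))
```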
Algorithm
- $$X$$: input nodes
- $$h$$: hidden nodes
- $$y$$: output nodes
- $$d$$: desired (target) outputs
- $$V$$: input x hidden weights (centers of clusters)
- $$W$$: hidden x output weights
- $$m$$: number of input nodes
- $$n$$: number of output nodes
- Init weights
- Input x hidden
- Init by a clustering algorithm, e.g. SOFM (see the sketch after this list)
- Hidden x output
- Random init
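A sketch of the initialization step, using a minimal 1-D SOFM to place the centers $$V$$ and small random values for $$W$$. The function names and the linear decay schedule are illustrative assumptions, and any clustering method (e.g. k-means) could stand in for the SOFM:

```python
import numpy as np

def init_centers_sofm(X, n_hidden, n_iter=1000, lr0=0.5, seed=0):
    """Place RBF centers with a minimal 1-D SOFM (one option among clustering methods)."""
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=n_hidden, replace=False)].astype(float)
    for t in range(n_iter):
        x = X[rng.integers(len(X))]                 # pick a random training sample
        frac = 1.0 - t / n_iter                     # assumed linear decay schedule
        lr = lr0 * frac
        radius = max(1.0, (n_hidden / 2.0) * frac)  # neighborhood radius on the 1-D map
        winner = int(np.argmin(np.linalg.norm(V - x, axis=1)))
        for j in range(n_hidden):                   # pull winner and its neighbors toward x
            g = np.exp(-((j - winner) ** 2) / (2.0 * radius ** 2))
            V[j] += lr * g * (x - V[j])
    return V

def init_output_weights(n_hidden, n_out, seed=0):
    """Random init for the hidden-to-output weights W."""
    return np.random.default_rng(seed).normal(scale=0.1, size=(n_hidden, n_out))
```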
- Activate (see the forward-pass sketch after this list)
- Input x hidden
- $$h_j = e^{-\frac{(X - V_j)^T(X - V_j)}{2\sigma_j^2}}$$
- $$\sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - v_{ij})^T(x_i - v_{ij})$$
- Hidden x output
- $$y_j = \sum_i W_{ij} h_i$$ (summing over the hidden units $$i$$)
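A sketch of the forward pass under these formulas; the widths `sigmas` are assumed precomputed (e.g. from the cluster spreads), and all names are illustrative:

```python
import numpy as np

def rbf_forward(x, V, sigmas, W):
    """Forward pass: Gaussian hidden layer, then linear output layer.

    x: (n_in,) input; V: (n_hidden, n_in) centers;
    sigmas: (n_hidden,) widths; W: (n_hidden, n_out) output weights.
    """
    d_sq = np.sum((x - V) ** 2, axis=1)        # (X - V_j)^T (X - V_j) per unit
    h = np.exp(-d_sq / (2.0 * sigmas ** 2))    # Gaussian basis activations
    y = h @ W                                  # y_j = sum_i W_ij * h_i
    return h, y
```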
- Calculate weights (see the sketch after this list)
- Input x hidden
- SOFM algorithm
- Hidden x output
- LMS (least mean square)
- $$W'_{ij} = W_{ij} + \Delta W_{ij} = W_{ij} + \alpha \cdot (d_j - y_j) \cdot h_i$$
- Pseudo-inverse
- $$W = DH^+ = D(H^T(H \cdot H^T)^{-1})$$
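A sketch of both ways to fit the output weights. `lms_step` is the per-sample rule above; `pseudo_inverse_weights` uses a row-per-sample convention, so it computes $$W = H^+D$$, which is the note's $$W = DH^+$$ with $$H$$ and $$D$$ transposed. Names are illustrative:

```python
import numpy as np

def lms_step(W, h, y, d, alpha=0.1):
    """One LMS update: W'_ij = W_ij + alpha * (d_j - y_j) * h_i."""
    return W + alpha * np.outer(h, d - y)

def pseudo_inverse_weights(H, D):
    """One-shot least-squares solution for the output weights.

    H: (n_samples, n_hidden) hidden activations, one row per sample.
    D: (n_samples, n_out) desired outputs.
    """
    return np.linalg.pinv(H) @ D               # W = H^+ D in this convention
```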