Overview

  • Hidden layer
    • Localized response to input
    • Number of hidden units determined by clustering results
  • Output layer
    • Linear combination of the basis functions computed by the hidden layer
    • Learning is faster than backpropagation (BP)

Applications

  • Classification
  • Function approximation

Gaussian Basis Function

$$ h_j = e^{-\frac{d_j^2}{2\sigma_j^2}} $$

where $$d_j$$ is the distance from the input to the center of hidden unit $$j$$, and $$\sigma_j$$ is that unit's width.
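As a minimal sketch, the localized response can be computed directly; the distance and width values below are hypothetical, chosen only to show the decay away from the center:

```python
import math

def gaussian_basis(d, sigma):
    # h_j = exp(-d_j^2 / (2 * sigma_j^2)): maximal (1.0) when the input
    # sits exactly on the center, decaying smoothly as the distance grows.
    return math.exp(-d ** 2 / (2 * sigma ** 2))

# Hypothetical distances from one input to three hidden-unit centers.
print([round(gaussian_basis(d, sigma=1.0), 3) for d in (0.0, 1.0, 2.0)])
# → [1.0, 0.607, 0.135]
```

The width $$\sigma_j$$ controls how localized the response is: a smaller width makes the hidden unit respond only to inputs very close to its center.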

Algorithm

  • $$X$$: input nodes
  • $$h$$: hidden nodes
  • $$y$$: output nodes
  • $$d$$: desired/target output node
  • $$V$$: input x hidden weights (centers of clusters)
  • $$W$$: hidden x output weights
  • $$m$$: number of input nodes
  • $$n$$: number of output nodes

  1. Init weights
    1. Input x hidden
      1. Initialize with a clustering algorithm, e.g. SOFM (self-organizing feature map)
    2. Hidden x output
      1. Random init
  2. Activate
    1. Input x hidden
      1. $$h_j = e^{-\frac{(X - V_j)^T(X - V_j)}{2\sigma_j^2}}$$
      2. $$\sigma_j^2 = \frac{1}{m}\sum_{i=1}^{m} (x_i - v_{ij})^T(x_i - v_{ij})$$
    2. Hidden x output
      1. $$y_j = \sum_{i=1}^{n} W_{ij}h_i$$
  3. Calculate weights
    1. Input x hidden
      1. SOFM algorithm
    2. Hidden x output
      1. LMS (least mean square)
        1. $$W'_{ij} = W_{ij} + \Delta W_{ij} = W_{ij} + \alpha \cdot (d_j - y_j) \cdot h_i$$
      2. Pseudo-inverse
        1. $$W = DH^+ = D(H^T(H \cdot H^T)^{-1})$$
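The steps above can be sketched end to end. This is an illustrative toy, not the full method: the sine target is an arbitrary choice, the centers are picked by subsampling the data rather than by SOFM, and the hidden-to-output weights use the pseudo-inverse variant (the LMS rule would reach the same least-squares solution iteratively):

```python
import numpy as np

# Toy function-approximation task (the sine target and data layout are
# assumptions for illustration; any smooth target would do).
X = np.linspace(-1.0, 1.0, 40).reshape(-1, 1)  # N samples, m = 1 input
D = np.sin(np.pi * X)                          # desired outputs, n = 1

# Step 1a: centers V. SOFM would learn these; subsampling the inputs is
# a crude stand-in used here to keep the sketch short.
V = X[::5]                                     # k = 8 centers

# Step 2a: width of each basis function from the mean squared distance
# of the samples to its center.
dist2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(axis=-1)  # (N, k)
sigma2 = dist2.mean(axis=0)                                  # (k,)

# Step 2: hidden activations h_j = exp(-||x - v_j||^2 / (2 sigma_j^2)).
H = np.exp(-dist2 / (2.0 * sigma2))                          # (N, k)

# Step 3b, pseudo-inverse variant: one-shot least-squares solve of
# H W = D for the hidden-to-output weights.
W = np.linalg.pinv(H) @ D                                    # (k, n)

Y = H @ W
print("max |Y - D|:", float(np.abs(Y - D).max()))
```

Because the hidden activations are fixed once the centers and widths are chosen, finding $$W$$ is a linear least-squares problem; this is why RBF training is faster than backpropagation, which must optimize both layers jointly.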
