Overview

A direct, non-iterative computation of the network parameters using the pseudo-inverse. Optimal in the mean-square-error sense.

  • The activation function must be continuous and invertible
  • The number of input nodes must equal the number of nodes in the first hidden layer
  • The whole training set is applied to the algorithm at once (batch, not iterative)

Moore-Penrose Pseudoinverse

$$\begin{aligned} H &\in \mathbb{R}^{m \times n} \\ H^+ &= (H^T H)^{-1} H^T &&\text{if } m > n \\ H^+ &= H^T (H H^T)^{-1} &&\text{if } m < n \end{aligned}$$
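The two closed forms above can be checked numerically against NumPy's SVD-based `np.linalg.pinv` (a minimal sketch; the explicit inverses exist only when the matrix has full rank, which random Gaussian matrices almost surely do):

```python
import numpy as np

rng = np.random.default_rng(0)

# Tall matrix (m > n): left pseudoinverse H+ = (H^T H)^-1 H^T
H = rng.standard_normal((5, 3))
H_plus = np.linalg.inv(H.T @ H) @ H.T

# Wide matrix (m < n): right pseudoinverse H+ = H^T (H H^T)^-1
G = rng.standard_normal((3, 5))
G_plus = G.T @ np.linalg.inv(G @ G.T)

# Both agree with the SVD-based pseudoinverse
print(np.allclose(H_plus, np.linalg.pinv(H)))  # True
print(np.allclose(G_plus, np.linalg.pinv(G)))  # True
```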

Algorithm

  1. Output normalization
    1. $$C_{req} = \frac{C_{req}}{\max(C_{req})}$$
  2. Calculate $$W_{init}$$
    1. Set $$B = A, C = C_{req}$$
    2. $$f(AW_{init}) = C_{req} \Rightarrow W_{init} = A^+ f^{-1}(C_{req})$$
  3. Calculate $$B_{req}$$
    1. Set $$C = C_{req}, W = W_{init}$$
    2. $$f(B_{req}W_{init}) = C_{req} \Rightarrow B_{req} = f^{-1}(C_{req})W_{init}^+$$
  4. Normalize $$B_{req}$$ so its values lie within the valid range of the activation function
    1. $$B_{req} = \frac{0.99 \, B_{req}}{\max(B_{req})}$$
  5. Calculate $$V$$
    1. Set $$B = B_{req}$$
    2. $$f(AV) = B_{req} \Rightarrow V = A^+ f^{-1}(B_{req})$$
  6. Calculate the actual hidden output $$B$$
    1. $$B = f(AV)$$
  7. Recalculate $$W$$
    1. $$W = B^+ f^{-1}(C_{req})$$
  8. Use the derived $$V$$ and $$W$$ to obtain outputs for inputs $$A$$
    1. $$B = f(AV)$$
    2. $$C = k f(BW)$$, where $$k$$ is the normalization factor from step 1
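The eight steps above can be sketched in NumPy with a sigmoid activation (a minimal illustration under stated assumptions, not the original implementation: the sigmoid choice, the toy data, the extra 0.99 shrink of the normalized targets, and the clipping of the required hidden values are all additions that keep $$f^{-1}$$ finite):

```python
import numpy as np

def f(x):                      # assumed activation: sigmoid
    return 1.0 / (1.0 + np.exp(-x))

def f_inv(y):                  # its inverse (logit), valid for y in (0, 1)
    return np.log(y / (1.0 - y))

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 0.9, (20, 5))       # inputs: 20 samples, 5 features
C_req = rng.uniform(0.1, 1.0, (20, 2))   # required outputs (positive targets)

# 1. Output normalization (the extra 0.99, not in the notes, keeps
#    f_inv finite at the maximum entry, which would otherwise be 1.0)
k = C_req.max()
C = 0.99 * C_req / k

# 2. W_init = A+ f^-1(C_req)
W_init = np.linalg.pinv(A) @ f_inv(C)

# 3. B_req = f^-1(C_req) W_init+
B_req = f_inv(C) @ np.linalg.pinv(W_init)

# 4. Normalize B_req into the sigmoid's valid range
#    (the clip is an added safeguard for negative entries)
B_req = 0.99 * B_req / B_req.max()
B_req = np.clip(B_req, 1e-6, 1 - 1e-6)

# 5. V = A+ f^-1(B_req)
V = np.linalg.pinv(A) @ f_inv(B_req)

# 6. Actual hidden output
B = f(A @ V)

# 7. Recalculate W from the actual hidden output
W = np.linalg.pinv(B) @ f_inv(C)

# 8. Forward pass with the derived V and W (k undoes the normalization)
C_hat = k * f(f(A @ V) @ W)
print(C_hat.shape)  # (20, 2)
```

Note how the hidden layer has the same width as the input (5 nodes), as required by the constraint listed in the overview.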

Enhanced Input Representation

Since the number of input nodes must equal the number of nodes in the first hidden layer, enriching the input representation increases the number of neurons in the hidden layer.
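One hypothetical way to enrich the inputs is to append derived features to the input matrix $$A$$; appending squared features, as below, is an assumption for illustration, not a transform prescribed by the notes:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.uniform(0.1, 0.9, (20, 5))

# Appending squared features doubles the input width, and therefore
# (by the input-width = hidden-width constraint) the hidden-layer width.
A_enh = np.hstack([A, A ** 2])
print(A_enh.shape)  # (20, 10)
```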
