rational-weighted recurrent NNs having boolean activation functions (simple thresholds) are equivalent to finite state automata (Minsky, “Computation: finite and infinite machines”, 1967);

rational-weighted recurrent NNs having saturated-linear sigmoid activation functions are equivalent to Turing Machines (Siegelmann and Sontag, “On the computational power of neural nets”, 1995);

real-weighted recurrent NNs having saturated-linear sigmoid activation functions are strictly more powerful than Turing Machines (Siegelmann and Sontag, “Analog computation via neural networks”, 1993);

but …

real-weighted recurrent NNs with Gaussian noise on the outputs cannot recognize arbitrary regular languages (Maass and Sontag, “Analog Neural Nets with Gaussian or Other Common Noise Distributions Cannot Recognize Arbitrary Regular Languages”, 1995).
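The first equivalence above can be made concrete with a tiny sketch (my own illustration, not from the cited papers): a recurrent net of boolean threshold units with rational weights that simulates the two-state finite automaton recognizing odd parity. The names `threshold` and `parity_net` are mine.

```python
def threshold(x):
    # Boolean (Heaviside) unit: fires iff its weighted input reaches 0.5
    return 1 if x >= 0.5 else 0

def parity_net(bits):
    """Recurrent net of two threshold units with rational weights.

    The state unit s mirrors the automaton state
    (s = 1 means an odd number of 1s has been seen so far)."""
    s = 0
    for b in bits:
        a = threshold(s + b - 1.0)      # hidden unit computes AND(s, b)
        s = threshold(s + b - 2.0 * a)  # state unit computes XOR(s, b)
    return s

assert parity_net([1, 0, 1, 1]) == 1    # three 1s: odd parity
assert parity_net([1, 1, 0, 0]) == 0    # two 1s: even parity
```

Each unit is a plain threshold of a rational-weighted sum, so the whole net is exactly the kind of machine Minsky's equivalence covers.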

Some interesting slides:

http://gjclokhorst.nl/hypercomputation.helsinki.pdf

http://homepages.ipact.nl/~lokhorst/hypercomputationUCL.pdf

To conclude: a true Turing machine (TM) probably cannot exist in physical reality, and an “exact” analog neural net is just as unlikely. Proving that analog neural nets have greater computational power than TMs is perfectly sound mathematics, but neither model is physically implementable.
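For the curious, the weight-encoding trick behind the Siegelmann–Sontag results can still be sketched symbolically. Below is a minimal illustration of my own (using exact `Fraction` arithmetic as a stand-in for a true real number): a bit string is folded into a single weight via a base-4 Cantor-style encoding, then read back using only affine maps and the saturated-linear activation. The names `sigma`, `encode`, and `decode` are hypothetical.

```python
from fractions import Fraction

def sigma(x):
    """Saturated-linear activation: identity on [0, 1], clipped outside."""
    return min(max(x, Fraction(0)), Fraction(1))

def encode(bits):
    """Fold a bit string into one weight: digit 2*b_i at base-4 position i."""
    return sum(Fraction(2 * b, 4 ** (i + 1)) for i, b in enumerate(bits))

def decode(w, n):
    """Read back n bits using only affine maps and sigma."""
    out = []
    for _ in range(n):
        b = sigma(4 * w - 1)       # saturates to exactly 0 or 1
        w = sigma(4 * w - 2 * b)   # shift the encoding one digit left
        out.append(int(b))
    return out

bits = [1, 0, 1, 1, 0]
assert decode(encode(bits), 5) == bits
```

With a genuinely real (e.g. uncomputable) weight the same extraction loop would yield an uncomputable bit stream, which is the source of the super-Turing power; any physical machine, of course, only ever holds a rational truncation of such a weight.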


## Min 16:14

on June 20, 2014: Interesting — what’s the difference between rational-weighted and real-weighted?

## wangxinxi 16:24

on June 20, 2014: Rational numbers can be represented with finite memory, but real numbers in general, e.g. pi, cannot.