Nothing is more practical than a good theory

wangxinxi
WTF, how could you let Hank die? W.W., you are a completely disgusting, greedy, and cowardly loser.

wangxinxi
An interesting paper about determinism, realism, and locality.
http://www.yaronhadad.com/wpcontent/uploads/2012/02/term_paper_yaron_hadad1.pdf 
wangxinxi
pickle.dump(model, open(sys.argv[1], 'wb'))
SystemError: error return without exception set

The error can be reproduced by the following code:

python2.7 -c "import cPickle; cPickle.dumps('\x00' * 2**31)"

Changing cPickle to pickle may work around this problem if you do not care about the efficiency of the pure-Python pickle. To completely solve the issue, you need to switch to Python 3. Using numpy.save or HDF5 may be good choices too.
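A minimal sketch of the Python 3 fix: pickle protocol 4 (the default since Python 3.8) supports objects larger than 4 GiB, so the 2**31-byte crash from Python 2's cPickle does not occur. The byte string is scaled down here so the sketch runs quickly; use 2**31 bytes to match the original failing case.

```python
import pickle

# Scaled-down stand-in for the >= 2**31-byte string that crashed cPickle
# under Python 2; under Python 3 this pickles without error.
data = b'\x00' * (2 ** 10)
blob = pickle.dumps(data)
assert pickle.loads(blob) == data

print(len(blob) > len(data))  # True: pickle adds a small framing overhead
```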

wangxinxi
An efficient estimator is one that achieves equality in the Cramér-Rao bound (CRB).
The Cramér-Rao bound says that the variance of an unbiased estimator cannot be smaller than the inverse of the Fisher information. However, a biased estimator can have a variance smaller than the Cramér-Rao bound, which is related to the bias-variance tradeoff.
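A small simulation of this point, under assumed parameters: for X_i ~ N(theta, sigma^2), the Fisher information is n/sigma^2, so the CRB for unbiased estimators is sigma^2/n. The sample mean attains it, while a (hypothetical) shrunk estimator 0.5 * mean is biased but has four times smaller variance.

```python
import random
import statistics

random.seed(0)
theta, sigma, n = 2.0, 1.0, 100
crb = sigma ** 2 / n  # = 0.01, the variance floor for unbiased estimators

means, shrunk = [], []
for _ in range(2000):
    xs = [random.gauss(theta, sigma) for _ in range(n)]
    m = sum(xs) / n
    means.append(m)          # unbiased: variance ~ CRB
    shrunk.append(0.5 * m)   # biased toward 0: variance ~ CRB / 4

print(statistics.variance(means))   # close to 0.01, attains the CRB
print(statistics.variance(shrunk))  # close to 0.0025, below the CRB
```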
wangxinxi
A major attraction of the connectionist approach to language, apart from its natural relation to neural computation, is that the very same processing mechanisms apply across the full range of linguistic structure.
Douglas L. T. Rohde, David C. Plaut, Connectionist Models of Language Processing

wangxinxi
A good thread about some philosophical questions on PAC-Bayes, Occam’s Razor, and Bayesian priors.
http://philtcs.wordpress.com/2011/11/03/class8occamsrazortheuniversalpriorandpaclearning/

An introduction to PAC-Bayesian learning theory:
http://people.kyb.tuebingen.mpg.de/seldin/ICML_PACBayes_Tutorial.pdf

PAC-Bayes bounds are a generalization of the Occam’s razor bound for algorithms which output a distribution over classifiers rather than just a single classifier. This includes the possibility of a distribution concentrated on a single classifier, so it is a true generalization. Most classifiers do not output a distribution over base classifiers; instead, they output either a single classifier or an average over base classifiers. Nonetheless, PAC-Bayes bounds are interesting for several reasons:
PAC-Bayes bounds are much tighter (in practice) than most common VC-related approaches on continuous classifier spaces. This can be shown by application to stochastic neural networks (see section 13) as well as other classifiers. It also can be seen by observation: when specializing the PAC-Bayes bounds to discrete hypothesis spaces, only O(ln m) sample complexity is lost.
Due to the achievable tightness, the result motivates new learning algorithms which strongly limit the amount of overfitting that a learning algorithm will incur.
The result found here will turn out to be useful for averaging hypotheses.
http://hunch.net/~jl/projects/prediction_bounds/thesis/mathml/thesisch6.xml
http://vserver1.cscs.lsa.umich.edu/~crshalizi/notebooks/learningtheory.html
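To make the finite-class Occam’s razor bound that PAC-Bayes generalizes concrete, here is a sketch (with made-up numbers) of the standard Hoeffding-based form: with probability at least 1 - delta, every h in a finite class H satisfies true_error(h) <= empirical_error(h) + sqrt(ln(|H|/delta) / (2m)).

```python
import math

def occam_bound(emp_error, class_size, m, delta=0.05):
    """Finite-class Occam's razor bound: high-probability upper bound on
    true error, given empirical error over m i.i.d. samples."""
    return emp_error + math.sqrt(math.log(class_size / delta) / (2 * m))

# Hypothetical numbers: 10% training error, |H| = 1000 hypotheses, m = 5000.
print(occam_bound(0.10, class_size=1000, m=5000))  # about 0.13
```

Note how the penalty grows only as sqrt(ln |H|), which is why discretizing a hypothesis space costs so little sample complexity.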

wangxinxi
A lot of philosophical discussions about latent variable models in this text.
http://dare.uva.nl/document/195746 
wangxinxi
rational-weighted recurrent NNs having boolean activation functions (simple thresholds) are equivalent to finite state automata (Minsky, “Computation: Finite and Infinite Machines”, 1967);
rational-weighted recurrent NNs having linear sigmoid activation functions are equivalent to Turing machines (Siegelmann and Sontag, “On the computational power of neural nets”, 1995);
real-weighted recurrent NNs having linear sigmoid activation functions are more powerful than Turing machines (Siegelmann and Sontag, “Analog computation via neural networks”, 1993);
but …
real-weighted recurrent NNs with Gaussian noise on the outputs cannot recognize arbitrary regular languages (Maass and Sontag, “Analog Neural Nets with Gaussian or Other Common Noise Distributions Cannot Recognize Arbitrary Regular Languages”, 1995);
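The first equivalence is easy to see in one direction with a toy example (my own construction, not from the cited papers): a tiny recurrent net of boolean threshold units with integer (hence rational) weights that recognizes the regular language of bit strings containing an odd number of 1s.

```python
# One recurrent step: the new state is XOR(state, x), built from threshold
# units: h1 = [state + x >= 1] (OR), h2 = [state + x >= 2] (AND),
# new_state = [h1 - h2 >= 1] (OR AND-NOT AND = XOR).
def step(state, x):
    h1 = 1 if state + x >= 1 else 0
    h2 = 1 if state + x >= 2 else 0
    return 1 if h1 - h2 >= 1 else 0

def accepts(bits):
    state = 0                  # the automaton's start state (even parity)
    for x in bits:
        state = step(state, x)
    return state == 1          # accepting state: odd parity

print(accepts([1, 0, 1, 1]))   # True  (three 1s)
print(accepts([1, 0, 1]))      # False (two 1s)
```

The hidden state plays exactly the role of the automaton's state, which is the intuition behind the Minsky result.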
Some interesting slides:
http://gjclokhorst.nl/hypercomputation.helsinki.pdf
http://homepages.ipact.nl/~lokhorst/hypercomputationUCL.pdf

To conclude: there is probably no real Turing machine (TM) in physical reality, and an exactly “accurate” analog neural net is just as unlikely to exist. Proving that analog neural nets have greater computational power than TMs is completely fine, but none of them is implementable.