Learning representations by back-propagating errors
Rumelhart, David E.; Hinton, Geoffrey E.; Williams, Ronald J.
We describe a new learning procedure, back-propagation, for networks of neurone-like units. The procedure repeatedly adjusts the weights of the connections in the network so as to minimize a measure of the difference between the actual output vector of the net and the desired output vector. As a result of the weight adjustments, internal 'hidden' units which are not part of the input or output come to represent important features of the task domain, and the regularities in the task are captured by the interactions of these units. The ability to create useful new features distinguishes back-propagation from earlier, simpler methods such as the perceptron-convergence procedure [1].
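
The weight-adjustment procedure the abstract describes can be made concrete in a few lines. Below is a minimal illustrative sketch, not code from the paper: a one-hidden-layer sigmoid network trained by back-propagation to minimize the squared difference between actual and desired outputs on XOR, a task that requires hidden units. The network size, learning rate, iteration count, and choice of task are assumptions made for the demo.

```python
# Minimal back-propagation sketch (illustrative assumptions throughout):
# a network with one layer of 'hidden' units, trained by gradient descent
# on the squared difference between actual and desired output vectors.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# XOR: solvable with hidden units, unlike with a single-layer perceptron.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(scale=0.5, size=(2, 4))  # input  -> hidden weights
b1 = np.zeros(4)
W2 = rng.normal(scale=0.5, size=(4, 1))  # hidden -> output weights
b2 = np.zeros(1)
lr = 1.0                                 # learning rate (assumption)

for step in range(5000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)             # hidden-unit activations
    y = sigmoid(h @ W2 + b2)             # actual output vector

    # Error: difference between actual and desired outputs.
    err = y - Y

    # Backward pass: propagate error derivatives layer by layer.
    dy = err * y * (1 - y)               # dE/d(pre-activation), output layer
    dh = (dy @ W2.T) * h * (1 - h)       # dE/d(pre-activation), hidden layer

    # Repeatedly adjust the weights along the negative gradient.
    W2 -= lr * h.T @ dy
    b2 -= lr * dy.sum(axis=0)
    W1 -= lr * X.T @ dh
    b1 -= lr * dh.sum(axis=0)

# Outputs should approach [0, 1, 1, 0] (exact values depend on the seed).
print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))
```

After training, the hidden units come to encode intermediate features of the task (here, combinations of the two inputs) that the output layer can combine linearly, which is the behavior the abstract highlights.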

Publication:
Nature, Volume 323, Issue 6088, pp. 533-536 (1986).
Pub Date: October 1986
DOI: 10.1038/323533a0
Bibcode: 1986Natur.323..533R

© The SAO/NASA Astrophysics Data System