Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making

Title:
Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making
Authors:
McMillen, Tyler; Simen, Patrick; Behseta, Sam
Abstract:
Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free response decision making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives is greater than two and subjects are free to respond at any time, partly due to the fact that there is no generally applicable statistical test for deciding optimally in such cases. However, here, too, analytical approximations to optimality that are physically and psychologically plausible have been analyzed. These analyses leave open questions that have begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons’ broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multihypothesis sequential probability ratio test.
Citation:
McMillen, T., P. Simen, and S. Behseta. 2011. "Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making." Neural Networks 24: 417-426.
Publisher:
Elsevier
Date Issued:
2011
Department:
Neuroscience
Type:
article
Published Version (DOI):
10.1016/j.neunet.2011.01.005
Permanent Link:
http://hdl.handle.net/11282/309430
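
The abstract above turns on two technical ingredients: the multi-hypothesis sequential probability ratio test (MSPRT) as the near-optimal benchmark for multi-alternative, free-response decisions, and a reward-modulated Hebbian rule for learning network weights. The sketch below is only a generic illustration of those two ingredients, not the linear-nonlinear network or learning rule from the paper: it assumes Gaussian observations with known means and unit variance, a stopping rule that fires when the leading hypothesis beats its closest rival by a fixed log-likelihood margin, and a plain three-factor (reward x pre x post) weight update; all names and parameter values are invented for the example.

```python
import numpy as np

rng = np.random.default_rng(seed=1)

# Toy setting (illustrative only): K alternatives, and hypothesis k says each
# observation is Gaussian with mean MU[k] and unit variance.
K = 3
MU = np.array([-1.0, 0.0, 1.0])
SIGMA = 1.0


def msprt_trial(true_k, threshold=4.0, max_steps=10_000):
    """Run one free-response trial of a simple multi-hypothesis SPRT.

    Log likelihoods are accumulated for every hypothesis; the trial stops when
    the leading hypothesis exceeds its strongest rival by `threshold` nats.
    Returns (choice, decision_time).
    """
    log_like = np.zeros(K)
    for t in range(1, max_steps + 1):
        x = rng.normal(MU[true_k], SIGMA)             # new noisy observation
        log_like += -0.5 * ((x - MU) / SIGMA) ** 2    # Gaussian log likelihood, constants dropped
        leader = int(np.argmax(log_like))
        margin = log_like[leader] - np.max(np.delete(log_like, leader))
        if margin >= threshold:
            return leader, t
    return int(np.argmax(log_like)), max_steps


def reward_modulated_hebb(W, pre, post, reward, lr=0.01):
    """Generic three-factor Hebbian update: dW = lr * reward * outer(post, pre).

    `reward` is +1 for a rewarded (correct) choice and 0 or -1 otherwise; it is
    a stand-in for whatever reward signal a task provides.
    """
    return W + lr * reward * np.outer(post, pre)


if __name__ == "__main__":
    # Accuracy and mean decision time of the MSPRT toy over a few hundred trials.
    n_trials = 500
    correct, times = 0, []
    for _ in range(n_trials):
        true_k = int(rng.integers(K))
        choice, t = msprt_trial(true_k)
        correct += int(choice == true_k)
        times.append(t)
    print(f"accuracy: {correct / n_trials:.3f}, "
          f"mean decision time: {np.mean(times):.1f} samples")

    # One illustrative Hebbian step on a random 2 x K weight matrix.
    W = rng.normal(size=(2, K))
    W = reward_modulated_hebb(W, pre=np.ones(K), post=np.array([1.0, 0.0]), reward=1.0)
```

In the paper's setting, the Hebbian update is what would tune a network's weights so that its units come to compute the quantities the MSPRT needs; the toy above only puts the two pieces side by side.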

Full metadata record

DC Field: Value (Language)
dc.contributor.author: McMillen, Tyler (en_US)
dc.contributor.author: Simen, Patrick (en_US)
dc.contributor.author: Behseta, Sam (en_US)
dc.date.accessioned: 2013-12-23T16:09:28Z
dc.date.available: 2013-12-23T16:09:28Z
dc.date.issued: 2011 (en)
dc.identifier.citation: McMillen, T., P. Simen, and S. Behseta. 2011. "Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making." Neural Networks 24: 417-426. (en_US)
dc.identifier.issn: 0893-6080 (en_US)
dc.identifier.uri: http://hdl.handle.net/11282/309430
dc.description.abstract: Optimal performance and physically plausible mechanisms for achieving it have been completely characterized for a general class of two-alternative, free response decision making tasks, and data suggest that humans can implement the optimal procedure. The situation is more complicated when the number of alternatives is greater than two and subjects are free to respond at any time, partly due to the fact that there is no generally applicable statistical test for deciding optimally in such cases. However, here, too, analytical approximations to optimality that are physically and psychologically plausible have been analyzed. These analyses leave open questions that have begun to be addressed: (1) How are near-optimal model parameterizations learned from experience? (2) What if a continuum of decision alternatives exists? (3) How can neurons’ broad tuning curves be incorporated into an optimal-performance theory? We present a possible answer to all of these questions in the form of an extremely simple, reward-modulated Hebbian learning rule by which a neural network learns to approximate the multihypothesis sequential probability ratio test. (en_US)
dc.publisher: Elsevier (en_US)
dc.identifier.doi: 10.1016/j.neunet.2011.01.005
dc.subject.department: Neuroscience (en_US)
dc.title: Hebbian learning in linear-nonlinear networks with tuning curves leads to near-optimal, multi-alternative decision making (en_US)
dc.type: article (en_US)
dc.identifier.journal: Neural Networks (en_US)
dc.identifier.volume: 24 (en_US)
dc.identifier.startpage: 417 (en_US)
All Items in The Five Colleges of Ohio Digital Repository are protected by copyright, with all rights reserved, unless otherwise indicated.