Berthier, N. E., Rosenstein, M. T., & Barto, A. G. (2005). Approximate optimal control as a model for motor learning. Psychological Review, 112(2), 329-346.
Current models of psychological development rely heavily on connectionist models that use supervised learning. These models adapt network weights when the network output does not match target outputs supplied by some external agent. The authors present a model of motor learning in which the child uses exploration to discover appropriate ways of responding. The model is consistent with what is known about how neural systems evaluate behavior. The authors model the development of reaching and investigate N. Bernstein's (1967) hypotheses about early motor learning. Simulations trace the course of learning and reproduce the kinematics of reaching by a dynamical arm. [They note that infants learn from successes as well as from failures, and that their model corresponds to reinforcement learning.]
Hazy, T. E., Frank, M. J., & O'Reilly, R. C. (2006). Banishing the homunculus: Making working memory work. Neuroscience, 139(1), 105-118.
The prefrontal cortex has long been thought to subserve both working memory and "executive" function, but the mechanistic basis of their integrated function has remained poorly understood, often amounting to a homunculus. This paper reviews progress by our laboratory and others in pursuing a long-term research agenda to deconstruct this homunculus by elucidating the precise computational and neural mechanisms underlying these phenomena. We outline six key functional demands of working memory, and then describe the current state of our computational model of the prefrontal cortex and associated systems in the basal ganglia (BG). The model, called PBWM (prefrontal cortex, basal ganglia working memory model), relies on actively maintained representations in the prefrontal cortex, which are dynamically updated/gated by the basal ganglia. It is capable of developing human-like performance largely on its own by taking advantage of powerful reinforcement learning mechanisms, based on the midbrain dopaminergic system and its activation via the basal ganglia and amygdala. These learning mechanisms enable the model to learn to control both itself and other brain areas in a strategic, task-appropriate manner. The model can learn challenging working memory tasks, and has been corroborated by several important empirical studies.