Lieberman (1993/2000) has a good short section on Neural Nets, beginning on page 511 of the 1993 edition (pp. 517-536 in the 2000 edition, with delta rules covered on pp. 522-523).

The heading for this section is —

THE NEURAL NETWORK SOLUTION (TO ASSOCIATION, ABSTRACTION, AND EVERYTHING …)

On page 519 Lieberman sets out the three basic assumptions of a typical neural network model ("In outline, neural network models are surprisingly simple and rest on three basic assumptions"); a minimal code sketch of these assumptions follows the list:

1. Neural network. There is a network of neurons, with every neuron in the network connected to every other neuron.

2. Transmission. When one neuron in a network becomes active, this activity is transmitted to the other neurons in the network; the amount of excitation transmitted between any two neurons depends on the strength of the neural connection between them.

3. Learning. If two neurons within the network are active at the same time, the connection between them will be strengthened, so that future activity in one of these neurons will be more likely to produce activity in the other neuron.
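To make assumptions 2 and 3 concrete, here is a minimal sketch (mine, not Lieberman's) of a small fully connected network in which activity spreads in proportion to connection strength and connections between co-active units are strengthened, i.e. a simple Hebbian update. The network size, learning rate, and activity patterns are illustrative assumptions, not anything specified in the book.

```python
import numpy as np

# Illustrative sketch of Lieberman's three assumptions; all values are assumptions.

n_units = 4                        # assumption 1: a small, fully connected network
W = np.zeros((n_units, n_units))   # connection strengths between every pair of units

def transmit(activity, W):
    """Assumption 2: activity is passed on in proportion to connection strength."""
    return W @ activity

def hebbian_update(W, activity, lr=0.1):
    """Assumption 3: connections between simultaneously active units are strengthened."""
    W = W + lr * np.outer(activity, activity)
    np.fill_diagonal(W, 0.0)       # keep self-connections at zero
    return W

# Repeatedly co-activate units 0 and 1, as in a conditioning pairing.
for _ in range(10):
    activity = np.array([1.0, 1.0, 0.0, 0.0])
    W = hebbian_update(W, activity)

# After learning, activating unit 0 alone now excites unit 1 but not units 2 or 3.
probe = np.array([1.0, 0.0, 0.0, 0.0])
print(transmit(probe, W))          # -> [0. 1. 0. 0.]
```

This is just the Hebbian "active together, strengthened together" rule; the delta rules Lieberman covers later (pp. 522-523 of the 2000 edition) replace it with error-driven weight changes.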

Lieberman goes on to say that “In essence, these assumptions are virtually identical to those made by Pavlov, almost 100 years ago: When two cortical centers are active simultaneously, the connection between them will be strengthened.”

Exactly the same thing could be said about Thorndike, who published a text containing many drawings of neurons of exactly the same kind as those reproduced on page 519 of Lieberman (2000) or in the chapter by Crick and Asanuma in Rumelhart and McClelland (1986).

[Figures: an example of neurons from Thorndike (1919), and a page where Thorndike refers to “association cells” and uses the term “synapsis”.]

After a detailed discussion, Lieberman concludes on page 531 that associative learning can be analysed at at least three levels: i) the level of neurons or neural networks; ii) the level of processes such as attention and varieties of memory; and iii) the level of whole-system outcomes such as expectancies, concepts, or cognitive maps.