In the previous drawing, we were talking about how op-amps (operational amplifiers, the essential component of an *analog* computer) can simulate the behaviour of neurons in the brain. Essentially, an op-amp (as used in the 1970s analog music synthesiser pictured) has two inputs (one inverting and one non-inverting) and an output. A "feedback network" connects the output back to the inverting input. Typically this is a resistor, though if we want to make a filter or oscillator we may use capacitors or other electronics in the feedback path.

Now, to make an ordinary amplifier, we use a feedback resistor of say 100k ohms and a set of input resistors of say 1k ohms each. This gives a "gain" (amplification amount) of 100: the ratio between the feedback resistor and the input resistor. Having several input resistors lets us "mix" several simultaneous signals -- very useful in a music studio, where we want to have several microphones (vocalists, guitars, drums, etc). In a neural network these inputs are said to be "weighted", i.e. the lower-value resistors feed more current into the op-amp and therefore make their signal "louder", or more important.

Now the great problem with analog neural nets is that they can't tweak their own input pots (and therefore "learn" like the brain can). So we use an algorithm called "back-propagation", trained on sample data, to adjust the weightings. Digital computers evolved due to this limitation of analog computers. In digital computers the "data" is held in RAM and ROM chips, which hold these "weightings" as "matrices", or lists of numbers. We then simulate the op-amp by multiplying (arithmetically) the numbers representing the inputs by those representing the "weightings", and summing the results. This is slower than analog, but far more precise, and it solves the problem of needing a "ghost in the machine" to twiddle the pots.
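To make the "weighting" idea concrete, here is a minimal Python sketch (with made-up resistor and signal values, not taken from the drawing) of the digital simulation described above: each weight is just the ratio of the feedback resistor to one input resistor, and the simulated op-amp output is the weighted sum of the inputs, negated because we feed the inverting input.

```python
# A sketch of digitally simulating the op-amp "mixer" described above.
# Illustrative values only; any real circuit would differ.

R_FEEDBACK = 100_000                        # 100k ohm feedback resistor
input_resistors = [1_000, 2_000, 10_000]    # 1k, 2k, 10k ohm input resistors
input_voltages  = [0.01, 0.02, 0.05]        # e.g. microphone signals, in volts

# Each "weight" is the gain contributed by one input resistor:
# gain = R_feedback / R_input (so 1k gives a gain of 100).
weights = [R_FEEDBACK / r for r in input_resistors]   # [100.0, 50.0, 10.0]

# Inverting summing amplifier: Vout = -(w1*V1 + w2*V2 + w3*V3).
# In a digital neural net this multiply-and-sum is exactly what replaces
# the analog resistors; back-propagation would then nudge the weights.
v_out = -sum(w * v for w, v in zip(weights, input_voltages))

print(weights)   # [100.0, 50.0, 10.0]
print(v_out)     # -2.5  (the amplified, inverted "mix")
```

The point of the sketch is the contrast drawn above: in the analog circuit those weights are fixed by physical resistors (or pots someone must twiddle by hand), whereas in the digital version they are just numbers in memory that a training algorithm can adjust.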