Comments
Artist: [Image: schematic of two columns of op amps; each first-column output connects through a resistor to the summing junction of each second-column op amp.]
The picture shows (rather vaguely) two columns of op amps. Each output of the first column is connected via a weighting resistor to the summing junction of each op amp in the second column. This forms the analog version of a neural network: each op amp acts as an inverting summer, and its input resistors set the weights (see the first sketch below). The individual gains (weighting resistors) are trainable in order to construct the model, e.g. an LLM (large language model).

In the digital version, the weighting resistors are replaced by matrix multiplications, i.e. inputs <-- outputs +.* weightings (an APL-style inner product: each next-stage input is the weighted sum of the previous stage's outputs; see the second sketch below). Training can therefore adjust the weighting matrix directly. This is normally done on a GPU card, which is designed for parallel matrix operations (SIMD, in the spirit of the CPU's old MMX extensions but massively wider).

In an MSM (Markov State Machine), as used in ::SHE+ILA::, the op amps are replaced by virtual ALUs, implemented as states or coprocesses (threads), and the weightings are replaced by the probabilities of a given branch between states being taken (cf. quantum physics). This is something like a serial version of an NN (neural network), since the stimuli (e.g. prompt strings) propagate through the network of states step by step rather than in parallel (cf. RS232 versus a Centronics parallel port); the third sketch below walks one stimulus through such a chain. MSM technology has, however, been successfully used in spell-checkers, word prediction, pattern/speech recognisers, compilers (e.g. ::SHE+ILA::), and some language models. E&OE.
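To make the analog weighting concrete, here is a minimal sketch (Python, with assumed component values) of the inverting-summer relation Vout = -Rf * sum(Vi / Ri), where each input resistor Ri sets the gain Rf/Ri applied to its input:

    # Inverting summing amplifier: each input resistor Ri sets the
    # weight Rf/Ri of its input -- the "trainable gain".
    Rf = 10e3                                          # feedback resistor (ohms), assumed
    inputs = [(1.0, 20e3), (0.5, 5e3), (-0.3, 10e3)]   # (Vi, Ri) pairs, hypothetical
    Vout = -Rf * sum(Vi / Ri for Vi, Ri in inputs)
    print(f"Vout = {Vout:.2f} V")                      # -> Vout = -1.20 V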
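And the digital equivalent: one layer of the network collapses to a single matrix multiplication (a numpy sketch; the values are arbitrary, not from any real model):

    import numpy as np

    # Previous-stage outputs (the analogue of the first column's op-amp outputs).
    outputs = np.array([0.5, -1.2, 0.8])
    # Trainable weighting matrix: two next-stage units, three inputs each.
    weights = np.array([[ 0.2, -0.1, 0.4],
                        [-0.3,  0.5, 0.1]])
    # inputs <-- outputs +.* weightings: each next-stage input is a weighted sum.
    inputs_next = weights @ outputs
    print(inputs_next)                                 # -> [ 0.54 -0.67]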
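Finally, a toy Markov state machine (hypothetical states and branch probabilities for word prediction; not ::SHE+ILA::'s actual tables) showing a stimulus propagating serially, one state transition at a time:

    import random

    # Branch probabilities between states replace the weighting resistors.
    transitions = {
        "start": [("the", 0.6), ("a", 0.4)],
        "the":   [("cat", 0.5), ("dog", 0.5)],
        "a":     [("cat", 0.7), ("dog", 0.3)],
        "cat":   [("end", 1.0)],
        "dog":   [("end", 1.0)],
    }

    def step(state):
        # Pick the next state according to the branch probabilities.
        r, cum = random.random(), 0.0
        for nxt, p in transitions[state]:
            cum += p
            if r < cum:
                return nxt
        return transitions[state][-1][0]   # guard against rounding

    state, path = "start", ["start"]
    while state != "end":                  # serial, step-by-step propagation
        state = step(state)
        path.append(state)
    print(" -> ".join(path))               # e.g. start -> the -> cat -> end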