Artist: A pink plushie cat holding a Geiger counter near a 20-sided die.
Two well-known AI techniques are FSMs (finite state machines) with Markov weightings, and Neural Nets. The Markov approach is easy to code and implement (see the earlier drawing "Training Sheila" for example code to train it), whereas the Neural Net approach needs extensive matrix-multiplication hardware (such as that inside a GPU card). However, the two strategies respond differently to superpositions and interpolations. For example, suppose we train our Markov FSM to emit "Oui" when we prompt it with "Yes", and "Non" when we prompt it with "No". This is a discontinuous distribution, since our FSM has no idea how to respond to, say, "Nes" or "Yo". In short, it can't generalise. A Neural Net (or the analogue equivalent, an array of op-amps with appropriate weighting resistors), by contrast, produces a linear superposition (think Schrödinger's Cat) of "Oui" and "Non" when prompted with "Nes" or "Yo".
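A minimal sketch of that discontinuity, assuming the trained Markov FSM boils down to an exact-match transition table (the names here are hypothetical, not the actual "Training Sheila" code):

```python
# Exact-match transition table standing in for a trained Markov FSM.
# Hypothetical sketch: real trained weights would pick among several
# responses, but the lookup is still all-or-nothing on the prompt.
responses = {"Yes": "Oui", "No": "Non"}

def markov_reply(prompt):
    # An unseen prompt falls into a gap in the training: there is no
    # notion of "nearby" states, so nothing gets interpolated.
    return responses.get(prompt, "<no transition trained>")

print(markov_reply("Yes"))  # -> Oui
print(markov_reply("Nes"))  # -> <no transition trained>
```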
Markov systems are still appropriate in certain circumstances, such as spelling checkers and Madlib bots, but they can emit inappropriate responses where there is a gap in their training (the cat MUST be alive or dead; it cannot be a mixture of both), whereas a Neural Net will interpolate (fill in the discontinuity) when prompted with data it has no exact match for.
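As a toy illustration of that interpolation (a sketch only, assuming a bag-of-characters encoding and a single linear layer fitted in closed form by least squares, rather than any particular trained network):

```python
import numpy as np

# Bag-of-characters encoding over the letters seen in training.
alphabet = list("YesNo")  # 'Y', 'e', 's', 'N', 'o'

def encode(prompt):
    return np.array([1.0 if c in prompt else 0.0 for c in alphabet])

# Train: "Yes" -> (1, 0) meaning "Oui"; "No" -> (0, 1) meaning "Non".
X = np.stack([encode("Yes"), encode("No")])
T = np.array([[1.0, 0.0],
              [0.0, 1.0]])
W = np.linalg.pinv(X) @ T  # one linear "layer", fit in closed form

# "Nes" shares letters with both training prompts, so the output is a
# superposition of the two responses rather than a blank.
oui, non = encode("Nes") @ W
print(f"'Nes' -> {oui:.2f} Oui + {non:.2f} Non")  # -> 0.67 Oui + 0.50 Non
```

A real Neural Net would add nonlinearities and gradient-descent training, but the continuous blending behaviour on unseen inputs is the same idea.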
However, a Neural Net is *far* more computationally intensive than a Markov Chain FSM (each node needs a matrix multiplication rather than just an index-and-branch), so development continueth :)