One of the most fundamental generative neural network architectures is the autoencoder; today we consider two related models, the Hopfield neural network and the (restricted) Boltzmann machine (RBM). The key difference between the two is that the Boltzmann machine and RBM, famously invented by Geoffrey Hinton (with Terrence Sejnowski), have latent variables, whereas the Hopfield neural network has only one kind of node. Within the Boltzmann machine or RBM, there are ONLY "visible" and "hidden" nodes - there are no "input" or "output" nodes (as there are in a Multilayer Perceptron); any notion of "input" and "output" is something that we mentally envision as we create our training data - but it does not appear within the architecture itself!
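To make the "only one kind of node" point concrete, here is a minimal sketch of a Hopfield network in NumPy (function names and the toy pattern are my own choices, not from any particular library): every unit plays the same role, a pattern is stored with a Hebbian rule, and recall proceeds by repeated sign updates. There is no separate input or output layer; we simply clamp the whole state to a noisy pattern and let the dynamics settle.

```python
import numpy as np

def train_hopfield(patterns):
    """Hebbian learning: sum of outer products, zero diagonal."""
    n = patterns.shape[1]
    W = np.zeros((n, n))
    for p in patterns:
        W += np.outer(p, p)
    np.fill_diagonal(W, 0)  # no self-connections
    return W / n

def recall(W, state, steps=10):
    """Synchronous sign updates until the state stops changing."""
    for _ in range(steps):
        new = np.sign(W @ state)
        new[new == 0] = 1  # break ties toward +1
        if np.array_equal(new, state):
            break
        state = new
    return state

# Store one +/-1 pattern, then recover it from a corrupted copy.
pattern = np.array([1, -1, 1, -1, 1, -1, 1, -1])
W = train_hopfield(pattern[None, :])

noisy = pattern.copy()
noisy[0] = -noisy[0]       # flip one bit
restored = recall(W, noisy)  # settles back to the stored pattern
```

Note that the "input" (the noisy pattern) and the "output" (the restored pattern) live in the very same set of nodes, which is exactly the point made above.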