Created by Max The Mooshroom
more than 5 years ago
Question | Answer |
Simplest model: a single neuron that sums its weighted inputs and applies an activation function. | Perceptron |
Neural net with 3 rules: 1. All nodes are fully connected. 2. Activation flows from input to output, without back-loops. 3. There is one hidden layer between input and output. | Feed Forward (FF) network |
Feed forward networks that use radial basis functions as activations instead of logistic ones. Instead of answering on a scale of "yes to no", radial basis functions answer the question "how far are we from the target?" | Radial Basis Function (RBF) network |
Feed forward neural networks with more than one hidden layer. Historically, stacking more layers led to exponential growth of training times, making DFFs quite impractical; only in the early 2000s did we develop a set of approaches that allowed DFFs to be trained effectively. | Deep Feed Forward (DFF) network |
FFN where each hidden cell receives its own output back with a fixed delay of one or more iterations. Context-based: decisions from past iterations or samples can influence current ones. | Recurrent Neural Network (RNN) |
Memory Cell | A cell that can process data when the data has time gaps (e.g. lag). It is composed of a couple of elements called gates, which are recurrent and control how information is remembered and forgotten. |
A recurrent neural net that uses memory cells in its hidden layers. | Long Short-Term Memory (LSTM) network |
LSTM with a variation on gating: the lack of an output gate makes it easier to repeat the same output for a concrete input multiple times; all LSTM gates are combined into an update gate, and the reset gate is closely tied to the input. They are less resource-consuming than LSTMs and almost as effective. | Gated Recurrent Unit (GRU) |
Used for classification, clustering and feature compression; can be trained without supervision. When the number of hidden cells is smaller than the number of input cells, the number of output cells equals the number of input cells, and the network is trained so that the output is as close to the input as possible, AEs are forced to generalise the data and search for common patterns. | Autoencoder (AE) |
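The feed-forward cards above can be sketched in a few lines. This is a minimal illustration, not a trained model: all weights are hypothetical hand-picked values. The logistic activation answers on a scale of "yes to no", while the RBF activation answers "how far are we from the target?"

```python
import math

def logistic(x):
    # "Yes to no" activation: squashes any input into (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def rbf(x, center, width=1.0):
    # "How far are we from the target?": 1.0 at the centre,
    # decaying toward 0 as the input moves away from it.
    return math.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def neuron(inputs, weights, bias):
    # Simplest neuron: weighted sum of inputs plus bias, then activation.
    return logistic(sum(w * x for w, x in zip(weights, inputs)) + bias)

def feed_forward(inputs, hidden_layer, output_layer):
    # Rule 1: every node sees every value from the previous layer.
    # Rule 2: activation flows strictly input -> hidden -> output, no loops.
    # Rule 3: exactly one hidden layer.
    hidden = [neuron(inputs, w, b) for (w, b) in hidden_layer]
    return [neuron(hidden, w, b) for (w, b) in output_layer]

# Hypothetical hand-picked (weights, bias) pairs, not trained values.
hidden_layer = [([1.0, -1.0], 0.0), ([0.5, 0.5], -0.25)]
output_layer = [([1.0, 1.0], -1.0)]
print(feed_forward([0.2, 0.7], hidden_layer, output_layer))
```

Swapping `logistic` for `rbf` inside `neuron` (with per-neuron centres) is essentially what turns this FF sketch into the RBF-network card.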
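The gated-memory cards (memory cell, LSTM, GRU) describe gates deciding how information is remembered and forgotten across iterations. A minimal single-unit GRU step, with scalar state and hypothetical hand-picked weights (the standard GRU equations, not trained values): the LSTM gates collapse into an update gate `z`, and the reset gate `r` is tied to the input.

```python
import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def gru_step(x, h_prev, p):
    # p is a dict of hypothetical scalar weights, not trained values.
    z = logistic(p["wz"] * x + p["uz"] * h_prev + p["bz"])  # update gate
    r = logistic(p["wr"] * x + p["ur"] * h_prev + p["br"])  # reset gate
    # Candidate state: the reset gate decides how much past context
    # flows into the new proposal.
    h_cand = math.tanh(p["wh"] * x + p["uh"] * (r * h_prev) + p["bh"])
    # The update gate blends old memory with the candidate: this is the
    # "remembered and forgotten" control from the memory-cell card.
    return (1.0 - z) * h_prev + z * h_cand

params = {"wz": 1.0, "uz": 0.5, "bz": 0.0,
          "wr": 1.0, "ur": 0.5, "br": 0.0,
          "wh": 1.0, "uh": 1.0, "bh": 0.0}
h = 0.0
for x in [1.0, 0.0, 0.0]:  # state persists across the time gap (the zeros)
    h = gru_step(x, h, params)
print(h)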
Möchten Sie mit GoConqr kostenlos Ihre eigenen Karteikarten erstellen? Mehr erfahren.