Compare and contrast the biological plausibility of alternative error correction ...

127 views · 2023-04-26 22:05 | Category: Papers

The basic principle of an error correction network is to minimize an error value. This error value is typically computed as the difference (or, more commonly, the squared difference) between the current output pattern and the desired output pattern. An error correction network then adjusts the weights in a way that minimizes the error. For example, the delta rule is defined as

∆w = α (y_desired − y) x

In other words, the direction of the weight change depends on whether the current output of that neuron is too high or too low, and the magnitude of the change scales with the activity of the corresponding input neuron. The result is that the system effectively identifies the input neurons that best predict the output neuron's activity, and adjusts their weights in a way that drives the output towards the desired activity.
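As a concrete illustration, the following is a minimal NumPy sketch of one delta-rule update for a single linear output neuron. The names (delta_rule_step, y_desired) are illustrative and not taken from any particular library.

```python
import numpy as np

def delta_rule_step(w, x, y_desired, alpha=0.1):
    """One delta-rule update for a single linear output neuron.

    w         -- weight vector (one weight per input neuron)
    x         -- input activation vector
    y_desired -- desired output activity
    alpha     -- learning rate
    """
    y = w @ x                   # current output of the neuron
    error = y_desired - y       # positive if the output is too low, negative if too high
    w = w + alpha * error * x   # each change is scaled by the corresponding input activity
    return w, error

# Toy usage: repeatedly nudge the weights towards a target activity of 1.0.
rng = np.random.default_rng(0)
w = rng.normal(scale=0.1, size=4)
x = np.array([1.0, 0.0, 1.0, 0.5])
for _ in range(50):
    w, error = delta_rule_step(w, x, y_desired=1.0)
print(round(error, 4))  # the error shrinks towards 0
```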

Error correction learning is by itself very powerful, as it uses a performance measure (minimizing the error) directly as the learning rule. All other forms of learning that could improve the system's performance, such as those of pattern associators and competitive networks, could in theory be approximated by an error correction network. The approach of using the loss function as the learning rule has led to impressive results in the field of deep learning. However, the utility of an error-correction learning rule in a biological system hinges on the origin of the error signal.

Consider a pattern associator with a Hebbian learning rule. Its function can easily be performed by an error correction network. However, the Hebbian learning rule requires only two factors, presynaptic activation and postsynaptic activation. Both of these factors are available at the synaptic cleft, and the resulting long-term potentiation has been confirmed in in vitro studies. An error correction network, on the other hand, also requires the computation of an error term. This is technically possible, but it relies on at least one additional interneuron and on top-down projections, an architecture that is rarely found in biological neural systems. It appears that the nervous system uses either Hebb-like learning rules or fixed, non-adjustable connections (such as the lateral inhibition of competitive networks) in most cases, and incorporates error correction learning only when an error signal is readily available.
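For contrast, a Hebbian update for a pattern associator can be written using only the two locally available factors. The sketch below (the hebbian_step helper is illustrative) highlights that no error term, and hence no extra interneuron, is needed.

```python
import numpy as np

def hebbian_step(W, x, y, alpha=0.01):
    """Hebbian update for a pattern associator.

    W -- weight matrix (output neurons x input neurons)
    x -- presynaptic input pattern
    y -- postsynaptic output pattern (clamped by the unconditioned input)

    Both factors are local to each synapse; unlike the delta rule,
    no separately computed error term is required.
    """
    return W + alpha * np.outer(y, x)
```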

One case in which this error signal is readily available is the motor system. Fine motor control does not rely solely on a motor execution signal from the motor cortex. The gain of the signal to each individual body part needs to be calibrated based on factors such as resistance and the condition of the muscle, information that is not available to the motor cortex. The only information available is the visual, vestibular and proprioceptive feedback. Consider, for example, the stabilization of lateral eye movements. If the desired gain of the eye-movement signal were available, the cortical motor execution signal could easily be mapped onto it with a pattern associator. In reality, however, the computation of this gain is too high-dimensional and complex, and it is more practical to compute an error signal. It has been observed that the cerebellum compares the desired proprioceptive feedback with the actual feedback, with an error signal computed in the inferior olive. Climbing fibres originating from the inferior olive were found to synapse onto Purkinje cells close to their parallel fibre synapses. The parallel fibres carry the raw motor signal, and the climbing fibre signal was hypothesized to act as the error factor in error-correction learning. Specifically, when both the input (parallel fibres originating from granule cells) and the error signal (climbing fibre) are active, the synaptic weight decreases through long-term depression. This reduction should lead to better performance and reduced error, approximating the error-correction learning rule.
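The hypothesized cerebellar rule can be sketched as a simplified three-factor update in which co-active parallel fibre input and climbing fibre error depress the parallel fibre-Purkinje cell synapse. The code below is a toy model under those assumptions, not a quantitative account of the circuit.

```python
import numpy as np

def purkinje_ltd_step(w, pf_activity, cf_error, alpha=0.05):
    """Simplified long-term depression at parallel fibre -> Purkinje cell synapses.

    w           -- synaptic weights of the parallel fibre inputs
    pf_activity -- parallel fibre activity (the raw motor-related signal)
    cf_error    -- climbing fibre error signal from the inferior olive (>= 0)

    Synapses whose parallel fibre is active while the climbing fibre signals
    an error are depressed, weakening the inputs that contributed to the
    erroneous output.
    """
    depression = alpha * cf_error * pf_activity
    return np.maximum(w - depression, 0.0)  # keep weights non-negative
```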

An alternative to the aforementioned one-layer error correction learning rule is backpropagation, which is widely used in the engineering of deep neural networks. It works by explicitly computing a gradient for each interneuron, allowing error correction in a multilayer architecture. The gradient simply expresses how the weights should be adjusted to minimize the error. In practice, the gradient of an interneuron can be expressed as the sum of the gradients of all neurons in the next layer, each scaled by its weight to the current interneuron. This information is not available at the synaptic cleft, and there has been no observation of such projected gradients being computed explicitly in biological systems. Backpropagation as a learning rule, by itself, cannot be considered biologically plausible.
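A minimal sketch of that computation, assuming the standard backpropagation formulation with sigmoid units, is given below; it makes explicit which quantities a biological synapse would need but does not have.

```python
import numpy as np

def hidden_layer_delta(delta_next, W_next, hidden_activation):
    """Backpropagated error term for a hidden ("inter-") layer of sigmoid units.

    delta_next        -- error terms of the next layer
    W_next            -- weights from this hidden layer to the next layer
                         (next-layer units x hidden units)
    hidden_activation -- sigmoid outputs of the hidden layer

    Each hidden unit's error is the sum of the downstream errors, scaled by
    the weights to those units and by the derivative of its own activation.
    None of this is locally available at the synaptic cleft.
    """
    return (W_next.T @ delta_next) * hidden_activation * (1.0 - hidden_activation)
```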

However, similar to one-layer error-correction networks, backpropagation offers a higher-level description of the behaviour of multi-layer networks. Suppose, for example, that a pattern associator with a competitive pre-processing layer is used to solve the XOR problem. During learning, the weights of this network are adjusted in a way that improves the network's performance (matching the desired output). This process can be approximated by a backpropagation network: a multi-layer backpropagation network likewise adjusts its weights in a way that improves task performance, and its interneurons are likely to become tuned to specific conjunctions of input patterns, resembling the pattern separation of competitive networks. In fact, the universal approximation theorem states that a multilayer network with a non-linear activation function can approximate any continuous function. Therefore, even though backpropagation as a learning rule is not biologically plausible, it is still useful as a tool for approximating biologically plausible learning rules in cases where such rules are unknown.
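To make the XOR example concrete, the sketch below trains a small sigmoid network on XOR with backpropagation; the layer sizes, learning rate and iteration count are arbitrary choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
T = np.array([[0], [1], [1], [0]], dtype=float)            # XOR targets

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)              # input -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)              # hidden -> output
lr = 0.5

for _ in range(10000):
    h = sigmoid(X @ W1 + b1)                                # hidden "interneurons"
    y = sigmoid(h @ W2 + b2)                                # network output
    delta_out = (y - T) * y * (1 - y)                       # output error term
    delta_hid = (delta_out @ W2.T) * h * (1 - h)            # backpropagated error
    W2 -= lr * h.T @ delta_out;  b2 -= lr * delta_out.sum(axis=0)
    W1 -= lr * X.T @ delta_hid;  b1 -= lr * delta_hid.sum(axis=0)

print(np.round(y.ravel(), 2))   # should approach [0, 1, 1, 0]
```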

In conclusion, one-layer error correction networks are biologically plausible, but they are more complex than Hebbian or competitive networks. In biological neural systems, they are found either where Hebbian learning is impractical or where the error signal is readily available. The cerebellum is one such case: the computation of the gain is too complex, as it deals with high-dimensional information outside the nervous system, while the error signal is readily available through the proprioceptive system. The backpropagation learning rule, on the other hand, relies on information that is not available at a synaptic cleft, and there is no evidence that a similar architecture exists in biological neural systems. It can nevertheless act as a higher-level description of biological neural systems, and simulate the functions of more biologically plausible network architectures.


