Towards Stable Learning in Predictive Coding Networks
Myoung Hoon Ha, Yoondo Sung, Youngha Jo, Hyunjun Kim, Sang Wan Lee
Predictive coding (PC) offers a biologically plausible model of cortical functions, encompassing processes such as learning, prediction, encoding, and memory. However, predictive coding networks (PCNs) face significant challenges in stability and scalability, which constrain our capacity to elucidate cortical computation. Our study identifies instability in PCNs as a fundamental issue, focusing on the exponential growth of latent state norms and prediction errors after inference. These dynamics lead to exploding and vanishing gradients in PCNs. Moreover, the concentration of prediction errors near the input and output layers impedes effective learning, exacerbating performance degradation as network depth increases. To address these limitations, we propose stabilizing techniques for PCNs, including length regularization and sequential training with skip connection modules. This approach counteracts the exponential growth of latent states and makes the distribution of prediction errors more uniform across layers. Empirical evaluations demonstrate that our approach enhances stability and generalization, enabling more efficient training of deeper networks. This study deepens our understanding of the complex dynamics of cortical networks, thereby advancing the practical application of predictive coding theory toward its full potential.
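The abstract does not give the update equations, but the role a length regularizer could play can be illustrated with a minimal sketch. Assuming a standard predictive coding energy F = 0.5 Σ_l ||e_l||² with top-down predictions e_l = x_{l+1} − W_l tanh(x_l), adding a penalty 0.5 λ Σ_l ||x_l||² contributes a −λ x_l pull toward the origin at every inference step, which is one simple way to counteract unbounded growth of latent-state norms. Everything below (the function name pcn_inference, the parameter lam, the exact energy form) is a hypothetical illustration, not the paper's method; the sequential-training and skip-connection components are not shown.

```python
import numpy as np

def pcn_inference(x_in, y_target, weights, lam=1e-3, lr_x=0.1, T=20):
    """One relaxation phase of a toy predictive coding network.

    Assumed energy:
        F = 0.5 * sum_l ||e_l||^2 + 0.5 * lam * sum_l ||x_l||^2,
        e_l = x_{l+1} - W_l @ tanh(x_l).
    The lam term is a guessed form of "length regularization": it keeps
    latent-state norms from growing without bound during inference.
    """
    L = len(weights)                       # number of weight matrices
    # Initialize latents with a feedforward sweep (a common PCN heuristic).
    xs = [x_in]
    for W in weights:
        xs.append(W @ np.tanh(xs[-1]))
    xs[-1] = y_target                      # clamp the output layer to the label

    for _ in range(T):
        # Prediction errors between consecutive layers.
        errs = [xs[l + 1] - weights[l] @ np.tanh(xs[l]) for l in range(L)]
        # Relax hidden latents only; input and output layers stay clamped.
        for l in range(1, L):
            d_tanh = 1.0 - np.tanh(xs[l]) ** 2
            grad = errs[l - 1] - d_tanh * (weights[l].T @ errs[l]) + lam * xs[l]
            xs[l] -= lr_x * grad           # gradient descent on the energy
    return xs, errs
```

After relaxation, a Hebbian-style weight update would use the returned errors, e.g. an increment proportional to the outer product of errs[l] and tanh(xs[l]); without the lam term, repeated relaxation in deep stacks is where the norm growth described in the abstract would appear.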