

Poster

On the Impacts of the Random Initialization in the Neural Tangent Kernel Theory

Guhan Chen · Yicheng Li · Qian Lin

Wed 11 Dec 11 a.m. PST — 2 p.m. PST

Abstract:

This paper discusses the impact of the random initialization of neural networks in the neural tangent kernel (NTK) theory, an aspect that most recent works on NTK theory ignore. It is well known that, as the network's width tends to infinity, a neural network with random initialization converges to a Gaussian process $f^{\mathrm{GP}}$, which takes values in $L^{2}(\mathcal{X})$, where $\mathcal{X}$ is the domain of the data. In contrast, to adopt the traditional theory of kernel regression, most recent works introduce a special mirrored architecture and a mirrored (random) initialization that ensure the network's output is identically zero at initialization. It therefore remains a question whether the conventional setting and the mirrored initialization lead wide neural networks to exhibit different generalization capabilities. In this paper, we first show that the training dynamics of the gradient flow of neural networks with random initialization converge uniformly to those of the corresponding NTK regression with random initialization $f^{\mathrm{GP}}$. We then show that $\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 1$ for any $s < \frac{3}{d+1}$ and $\mathbf{P}(f^{\mathrm{GP}} \in [\mathcal{H}^{\mathrm{NT}}]^{s}) = 0$ for any $s \geq \frac{3}{d+1}$, where $[\mathcal{H}^{\mathrm{NT}}]^{s}$ is the real interpolation space of the RKHS $\mathcal{H}^{\mathrm{NT}}$ associated with the NTK. Consequently, the generalization error of the wide neural network trained by gradient descent is $\Omega(n^{-\frac{3}{d+3}})$, and it still suffers from the curse of dimensionality. Thus, the NTK theory may not explain the superior performance of neural networks.
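For intuition on the limiting dynamics referenced above: under squared loss, kernel (NTK) regression trained by gradient flow from an initial function $f_0$ has the standard closed-form solution $f_t(x) = f_0(x) + K(x,X)\,K(X,X)^{-1}\big(I - e^{-K(X,X)t/n}\big)\big(y - f_0(X)\big)$, where $f_0$ plays the role of the Gaussian-process limit $f^{\mathrm{GP}}$ and the mirrored initialization corresponds to $f_0 = 0$. The sketch below is a minimal illustration of this generic linearized-dynamics formula, not the paper's code; the RBF kernel and the i.i.d. Gaussian values standing in for $f^{\mathrm{GP}}$ are placeholder assumptions.

```python
import numpy as np

def ntk_regression_flow(K_train, K_test_train, y, f0_train, f0_test, t):
    """Closed-form gradient-flow solution of kernel (NTK) regression at time t.

    With loss (1/2n) * sum_i (f(x_i) - y_i)^2, the predictor evolves as
        f_t(x) = f_0(x) + K(x, X) K(X, X)^{-1} (I - exp(-K(X, X) t / n)) (y - f_0(X)).
    Here f_0 stands in for the random-initialization limit f^GP; the mirrored
    initialization corresponds to f_0 = 0.
    """
    n = len(y)
    evals, evecs = np.linalg.eigh(K_train)
    # Apply K^{-1} (I - exp(-K t / n)) via the eigendecomposition of K(X, X).
    coef = (1.0 - np.exp(-evals * t / n)) / evals
    residual_update = evecs @ (coef * (evecs.T @ (y - f0_train)))
    return f0_test + K_test_train @ residual_update

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.standard_normal((20, 3))
    X /= np.linalg.norm(X, axis=1, keepdims=True)        # inputs on the sphere, as in NTK settings
    y = np.sin(3.0 * X[:, 0])
    X_test = rng.standard_normal((5, 3))
    X_test /= np.linalg.norm(X_test, axis=1, keepdims=True)

    def rbf(A, B):
        # Placeholder kernel standing in for the actual NTK.
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2)

    K = rbf(X, X) + 1e-10 * np.eye(len(X))               # jitter for numerical stability
    K_test = rbf(X_test, X)
    # Crude stand-in for a draw of f^GP evaluated at the training and test points.
    f0_tr = 0.1 * rng.standard_normal(len(X))
    f0_te = 0.1 * rng.standard_normal(len(X_test))

    pred_random = ntk_regression_flow(K, K_test, y, f0_tr, f0_te, t=1e4)
    pred_mirror = ntk_regression_flow(K, K_test, y, np.zeros(len(X)), np.zeros(len(X_test)), t=1e4)
    print("prediction gap induced by nonzero initialization:", pred_random - pred_mirror)
```

The printed gap between the two runs isolates the contribution of the nonzero initial function: the paper's results concern how this $f^{\mathrm{GP}}$ component affects the generalization of the wide-network limit.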
