The academic translation is as follows:

If the number of samples used to estimate the pseudo-gradient is held fixed, the variance of the estimate no longer decreases over iterations, so the algorithm cannot converge to the exact solution in expectation. The corollary below shows that when Algorithm 1 uses a constant batch size, it converges only linearly to a neighborhood of the optimal solution, i.e., the Nash equilibrium of F. The radius D of this neighborhood depends on the network structure, the batch size, the step size, and the problem parameters η, L, ν.
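The qualitative behavior described above can be sketched with a toy experiment: minimizing a simple quadratic with a gradient estimated from a constant number of noisy samples. The error first decays geometrically, then plateaus at a noise floor whose size shrinks with larger batches and smaller steps. This is an illustrative stand-in, not the algorithm or game from the text; the problem, step size, and noise model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_grad(x, batch_size, noise_std=1.0):
    """Unbiased gradient of f(x) = 0.5 * ||x||^2 from a fixed batch
    of noisy samples; variance scales like noise_std^2 / batch_size."""
    noise = rng.normal(0.0, noise_std, size=(batch_size, x.size))
    return x + noise.mean(axis=0)

def run(batch_size, step=0.1, iters=300):
    """Constant-step, constant-batch stochastic gradient descent."""
    x = np.ones(5)          # start away from the optimum x* = 0
    errs = []
    for _ in range(iters):
        x = x - step * noisy_grad(x, batch_size)
        errs.append(np.linalg.norm(x))
    return errs

errs = run(batch_size=8)
# Geometric decay early on, then a plateau: the iterates hover in a
# neighborhood of x* whose radius depends on step size, batch size,
# and the gradient-noise variance.
floor = float(np.mean(errs[-50:]))
```

Doubling the batch size (or shrinking the step) lowers the plateau `floor` but does not remove it, matching the claim that a constant sampling capacity yields linear convergence only to a neighborhood of the solution.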

Academic translation: with a fixed number of pseudo-gradient samples, the algorithm converges linearly to a neighborhood of the optimal solution

Original source: https://www.cveoy.top/t/topic/cAWW (copyright belongs to the author).
