Fixed Point Networks: Implicit depth models with Jacobian-free backprop.

Under review, 2021

Joint with Samy Wu Fung, Howard Heaton, Qiuwei Li, Stanley Osher and Wotao Yin.

A growing trend in deep learning replaces fixed-depth models with approximations of the limit as network depth approaches infinity. This approach uses a portion of the network weights to prescribe behavior by defining a limit condition, which makes network depth implicit: it varies with the provided data and an error tolerance. We propose a new implicit-depth architecture, Fixed Point Networks (FPNs), and a new Jacobian-free backpropagation (JFB) scheme. JFB differentiates through only a single application of the fixed-point map at the equilibrium, avoiding the Jacobian-based linear solve of standard implicit differentiation; this makes FPNs much faster and easier to train.
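To make the idea concrete, here is a minimal PyTorch sketch of the JFB pattern, not the paper's reference implementation: the forward pass iterates a fixed-point map to convergence with gradients disabled, and the backward pass differentiates through one final application of the map. The layer `T`, the module name `FPNSketch`, and the parameters `max_iter` and `tol` are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FPNSketch(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        # T(z, x): one application of the fixed-point map; the same weights
        # are reused at every (implicit) depth. Assumed to be a contraction
        # in z so the iteration converges.
        self.T = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh())

    def step(self, z, x):
        return self.T(torch.cat([z, x], dim=-1))

    def forward(self, x, max_iter=50, tol=1e-4):
        z = torch.zeros_like(x)
        # Forward pass: iterate to an approximate fixed point z* = T(z*, x)
        # with autograd off, so memory cost does not grow with depth.
        with torch.no_grad():
            for _ in range(max_iter):
                z_next = self.step(z, x)
                if (z_next - z).norm() < tol * z.norm().clamp_min(1e-8):
                    z = z_next
                    break
                z = z_next
        # JFB: backpropagate through a single application of T at z*,
        # skipping the Jacobian-based linear solve that implicit
        # differentiation would otherwise require.
        return self.step(z.detach(), x)

# Usage: gradients flow only through the final step.
model = FPNSketch(dim=8)
x = torch.randn(4, 8)
loss = model(x).pow(2).mean()
loss.backward()
```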

arXiv version