Don’t go with the Flow – A new tensor algebra for Neural Networks
With Lior Horesh (IBM, Columbia University)
Multi-dimensional information often involves multi-dimensional correlations that may remain hidden to traditional matrix-based learning algorithms. In this study, we propose a tensor neural network framework that offers an exciting new paradigm for supervised machine learning. The tensor neural network structure is based upon the t-product (Kilmer and Martin, 2011), an algebraic formulation for multiplying tensors via circulant convolution that inherits mimetic matrix properties.
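To make the t-product concrete, here is a minimal NumPy sketch of the Kilmer–Martin construction: the product of two third-order tensors is a circulant convolution along the third mode, computed efficiently by taking an FFT along that mode, multiplying the frontal slices, and inverting the transform. This is an illustration of the published definition, not the speakers' implementation.

```python
import numpy as np

def t_product(A, B):
    """t-product of A (n1 x n2 x n3) and B (n2 x m x n3).

    Equivalent to circulant convolution of the frontal slices:
        C[:, :, k] = sum_j A[:, :, j] @ B[:, :, (k - j) mod n3]
    computed in O(n3 log n3) slice products via the FFT.
    """
    Ah = np.fft.fft(A, axis=2)                    # transform along the third mode
    Bh = np.fft.fft(B, axis=2)
    Ch = np.einsum('ijk,jlk->ilk', Ah, Bh)        # slice-wise matrix products
    return np.real(np.fft.ifft(Ch, axis=2))       # back to the spatial domain
```

The "mimetic" matrix properties mentioned in the abstract follow from this structure: for example, the identity tensor (identity matrix in the first frontal slice, zeros elsewhere) acts as a multiplicative identity under the t-product, just as the identity matrix does under matrix multiplication.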
We demonstrate that our tensor neural network architecture is a natural high-dimensional extension of conventional neural networks. We then build on Haber and Ruthotto's (2017) interpretation of deep neural networks as discretizations of nonlinear differential equations to construct intrinsically stable tensor neural network architectures. We illustrate the advantages of stability and demonstrate the potential of tensor neural networks through numerical experiments on the MNIST dataset.
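The ODE interpretation referenced above can be sketched as follows: a residual network is read as a forward-Euler discretization of dY/dt = σ(K(t)Y + b(t)), and stability is promoted by constraining each layer's kernel, e.g. to be antisymmetric so its eigenvalues lie on the imaginary axis. This matrix-valued sketch only illustrates the Haber–Ruthotto idea; the talk's contribution replaces the matrix products with tensor operations.

```python
import numpy as np

def stable_resnet_forward(Y0, Ks, bs, h=0.1):
    """Forward pass of a residual net viewed as explicit Euler steps.

    Each layer applies Y <- Y + h * tanh(A @ Y + b) with A = K - K.T,
    an antisymmetric kernel whose purely imaginary spectrum keeps the
    underlying continuous dynamics from blowing up or dying out.
    """
    Y = Y0
    for K, b in zip(Ks, bs):
        A = K - K.T                       # antisymmetric layer kernel
        Y = Y + h * np.tanh(A @ Y + b)    # one Euler step of the ODE
    return Y
```

With a small step size h, the forward propagation stays well conditioned as depth grows, which is the stability property the abstract highlights.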
- Speaker: Lior Horesh (IBM, Columbia University)
- Thursday 10 May 2018, 15:00–16:00
- Venue: MR14, Centre for Mathematical Sciences.
- Series: Applied and Computational Analysis; organiser: Carola-Bibiane Schoenlieb.