Efficient Deep Neural Networks

Posted by Christopher Mertin on May 05, 2017 in Project • 2 min read

This was the project required to complete my Masters at the University of Utah. The project explored the use of low-rank matrix approximations and hierarchical matrices to reduce the number of parameters in deep neural networks. The idea behind this was to lower the computation time while also speeding up learning.
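As a rough illustration of the low-rank half of the idea, here is a minimal NumPy sketch (not the project's actual code; the width \(N\) and rank \(r\) are made-up example values) of replacing a dense layer's \(N \times N\) weight matrix with a truncated-SVD factorization:

```python
import numpy as np

# Minimal sketch of the low-rank idea (illustrative only, not the
# project's code): replace a dense layer's N x N weight matrix W with
# a rank-r factorization, cutting parameters from N^2 to 2*N*r.

N, r = 1024, 16  # layer width and target rank (made-up example values)
rng = np.random.default_rng(0)

# Toy weight matrix that is approximately rank r; trained weight
# matrices are often compressible in a similar way.
W = rng.standard_normal((N, r)) @ rng.standard_normal((r, N))
W += 0.01 * rng.standard_normal((N, N))

# Truncated SVD gives the best rank-r approximation of W.
U, s, Vt = np.linalg.svd(W, full_matrices=False)
U_r = U[:, :r] * s[:r]   # N x r, singular values folded into U
V_r = Vt[:r, :]          # r x N

x = rng.standard_normal(N)
y_dense = W @ x              # one O(N^2) product
y_lowrank = U_r @ (V_r @ x)  # two O(N*r) products

print(f"parameters: dense={N*N:,}  low-rank={2*N*r:,}")
print("relative error:",
      np.linalg.norm(y_dense - y_lowrank) / np.linalg.norm(y_dense))
```

Roughly speaking, hierarchical matrices take this a step further by applying low-rank compression block-wise over a recursive partition of the matrix rather than to the matrix as a whole.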

The approach produced promising results, shown in the figure below, and the full report can be found here.

[Figure: learning curves for the Hierarchical Matrix Neural Network (HNN) versus a standard Neural Network (NN)]

The above figure shows the learning curves for a Hierarchical Matrix Neural Network (HNN) and a typical Neural Network (NN), where both were trained and tested on the exact same dataset 5 times. As the figure shows, the HNN learned much more quickly (as a function of the number of iterations), and since the computational cost was also reduced from \(\mathcal{O}(N^2)\) to \(\mathcal{O}(N)\), the runtime was quicker as well.
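As a back-of-the-envelope check of where the savings can come from, using a plain rank-\(r\) factorization \(W \approx UV^{\top}\) as a simplified stand-in for the hierarchical structure: the per-layer matrix-vector cost drops from \(N^2\) multiply-adds for \(y = Wx\) to \(2Nr\) for \(y = U(V^{\top}x)\), which is linear in \(N\) for a fixed rank:

\[
\underbrace{N^2}_{y \,=\, Wx} \;\longrightarrow\; \underbrace{2Nr}_{y \,=\, U(V^{\top}x)} = \mathcal{O}(N) \quad \text{for fixed } r.
\]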