The Lottery Ticket Hypothesis: Finding Sparse, Trainable Neural Networks
Abstract: Neural network pruning techniques can reduce the parameter counts of trained networks by over 90%, decreasing storage requirements and improving computational performance of inference without compromising accuracy. However, contemporary experience is that the sparse architectures produced by pruning are difficult to train from the start, which would similarly improve training performance.
We find that a standard pruning technique naturally uncovers subnetworks whose initializat...
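To make the pruning technique referenced above concrete, here is a minimal sketch of one-shot magnitude pruning with NumPy: weights whose absolute value falls below a sparsity-determined threshold are zeroed, and a binary mask records which weights survive. This is a generic illustration under assumed parameters (the `sparsity` fraction and the example weight matrix are hypothetical), not the paper's exact iterative procedure.

```python
import numpy as np

def magnitude_prune(weights, sparsity=0.9):
    """Zero out the smallest-magnitude weights; return pruned weights and mask.

    Generic one-shot magnitude-pruning sketch (illustrative, not the
    paper's exact method). `sparsity` is the fraction of weights removed.
    """
    flat = np.abs(weights).flatten()
    k = int(sparsity * flat.size)
    # Threshold is the k-th smallest magnitude; weights below it are pruned.
    threshold = np.partition(flat, k)[k] if k < flat.size else np.inf
    mask = (np.abs(weights) >= threshold).astype(weights.dtype)
    return weights * mask, mask

# Example: prune 90% of a random 100x100 weight matrix.
rng = np.random.default_rng(0)
w = rng.normal(size=(100, 100))
pruned, mask = magnitude_prune(w, sparsity=0.9)
print(f"kept {mask.mean():.0%} of weights")
```

In the lottery-ticket setting, the surviving mask would then be applied to the network's original initialization and the sparse subnetwork retrained from scratch.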