Abstract:
Pruning connections in a fully connected neural network makes it possible to remove redundancy from the network structure and thereby reduce the computational complexity of its implementation, while preserving the classification quality for the images fed to its input. However, the choice of parameters for the pruning procedure has not yet been studied sufficiently. This choice depends essentially on the configuration of the neural network; nevertheless, any neural network configuration contains one or more multilayer perceptrons, for which universal recommendations on choosing the pruning parameters can be developed. The paper considers one of the most promising methods for practical implementation: iterative pruning, which uses preprocessing of the input signals to regularize the training of the neural network. For a specific multilayer perceptron configuration and the MNIST (Modified National Institute of Standards and Technology) dataset, a collection of handwritten digit samples proposed by the US National Institute of Standards and Technology as a benchmark for comparing image recognition methods, the dependences of the handwritten-digit classification accuracy and of the training speed on the learning rate, the pruning interval, and the number of connections removed at each pruning iteration were obtained. It is shown that, within the studied range, the best set of parameters of the training-with-pruning procedure improves the classification quality by about 1 % compared with the worst set. The convex nature of these dependences enables a constructive approach to finding a neural network configuration that provides the highest classification accuracy at the minimum computational cost of implementation.
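The iterative pruning loop with the three parameters studied above can be summarized in a short sketch. The Python fragment below is a minimal illustration under assumptions of our own: magnitude-based selection of the connections to remove, a stand-in gradient step in place of real MNIST training, and illustrative parameter values; none of the names or values are taken from the paper.

import numpy as np

rng = np.random.default_rng(0)

# Toy weight matrix of one fully connected layer (784 inputs, 128 neurons).
W = rng.normal(scale=0.1, size=(784, 128))
mask = np.ones(W.size, dtype=bool)      # False marks a removed connection

learning_rate    = 0.01   # step size of the training procedure
pruning_interval = 100    # training steps between pruning iterations
links_per_iter   = 500    # connections removed at each pruning iteration

def train_step(W, lr):
    # Placeholder for one gradient step; a real experiment would
    # compute gradients on MNIST mini-batches instead.
    grad = rng.normal(scale=0.01, size=W.shape)
    return W - lr * grad

for step in range(1, 1001):
    W = train_step(W, learning_rate)
    W.ravel()[~mask] = 0.0              # keep removed connections at zero
    if step % pruning_interval == 0:
        # Prune the still-active connections with the smallest magnitude.
        active = np.flatnonzero(mask)
        order = np.argsort(np.abs(W.ravel()[active]))
        pruned = active[order[:links_per_iter]]
        mask[pruned] = False
        W.ravel()[pruned] = 0.0

print(f"remaining connections: {mask.sum()} of {mask.size}")

Because the reported dependences are convex, a practical way to use such a loop is to sweep the learning rate, pruning interval, and links_per_iter over a small grid and keep the setting with the best validation accuracy.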