Background: The Importance of Trainability in Neural Pruning

Neural network pruning, a method for improving computational efficiency, has gained significant traction in recent years. Its goal is to remove redundant parameters from a neural network without substantially degrading its performance. The process typically has three phases: pre-training a dense model, pruning unnecessary connections to obtain a sparse model, and retraining the sparse model to recover performance.

Pruning falls into two main categories: unstructured pruning and structured pruning. The latter, structured (filter) pruning, removes whole filters or channels and is better matched to modern architectures such as ResNets, since its goal is faster networks rather than merely smaller ones.

A notable phenomenon in pruning is the crucial role of trainability, that is, how effectively the pruned network can still be optimized during retraining. If the trainability damaged by pruning is left unattended, the sparse model can underperform, and its heightened sensitivity to the retraining learning rate can bias comparisons between pruning methods.

Method: Introducing Trainability Preserving Pruning (TPP)

Trainability Preserving Pruning (TPP) is a novel filter pruning algorithm designed to maintain trainability through a regularized training process. The method decouples the pruned (unimportant) filters from the kept (important) filters, minimizing the dependencies between them that typically break trainability after pruning. TPP relies on two main strategies:

1. Regularizing the weight gram matrix to encourage zero correlation between pruned and kept filters. This avoids over-penalizing important weights, which could otherwise cause optimization issues and suboptimal training (see the sketch after this list).

2. Incorporating a Batch Normalization (BN) regularizer. Given that BN parameters are part of the trainable network, their…
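To make the gram-matrix decorrelation idea more concrete, below is a minimal PyTorch-style sketch of such a penalty for a single convolutional layer. It is not the paper's reference implementation: the function name decorrelation_penalty, the pruned_idx mask, and the lambda_reg weighting in the usage comment are illustrative assumptions.

```python
import torch

def decorrelation_penalty(weight: torch.Tensor, pruned_idx: torch.Tensor) -> torch.Tensor:
    """Sketch of a gram-matrix decorrelation penalty for one conv layer.

    weight:     conv weight of shape (out_channels, in_channels, kH, kW)
    pruned_idx: boolean mask of shape (out_channels,), True for filters
                scheduled for pruning (selected by any importance criterion).
    """
    out_channels = weight.shape[0]
    w = weight.reshape(out_channels, -1)   # one row per filter
    gram = w @ w.t()                       # (out_channels, out_channels) correlations

    kept_idx = ~pruned_idx
    # Cross-correlations between pruned and kept filters: push them toward zero,
    # so removing the pruned filters later barely disturbs the kept ones.
    cross = gram[pruned_idx][:, kept_idx]
    penalty = (cross ** 2).sum()

    # Also shrink the pruned filters themselves (diagonal entries are their
    # squared norms), while leaving the kept filters unpenalized.
    penalty = penalty + gram[pruned_idx][:, pruned_idx].diagonal().sum()
    return penalty


# Usage sketch inside a training loop (masks and lambda_reg are hypothetical):
# loss = task_loss + lambda_reg * sum(
#     decorrelation_penalty(m.weight, masks[name])
#     for name, m in model.named_modules()
#     if isinstance(m, torch.nn.Conv2d) and name in masks
# )
```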