The paper deals with methods for estimating sparse parameterizations of neural networks, which can be used to prune overparameterized networks, reduce their complexity, and retain only the relevant parameters, thereby improving the overall interpretability of the model. Both classical and variational methods for estimating these parameterizations are described. This is achieved by reviewing a class of prior distributions, here referred to as shrinkage priors, which make it possible to encode a preference for sparse parameterizations directly in the model. Variational methods are then used to approximate the posterior distribution over the model parameters. This posterior distribution makes it possible to better quantify the uncertainty of the parameters. Finally, the methods are applied to several models, including linear regression, logistic regression, and neural networks, and are also employed in the setting of multi-instance learning. The experiments are carried out on both synthetic and real data.
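As a rough illustration of the general approach summarized above, the following sketch fits a mean-field Gaussian variational posterior to the weights of a Bayesian linear regression under a sparsity-inducing Laplace prior, one simple member of the shrinkage-prior family. This is not the paper's implementation; the choice of prior, all hyperparameters, and the synthetic data are assumptions made purely for illustration.

```python
# A minimal sketch (not the paper's method): mean-field variational inference
# for Bayesian linear regression with a Laplace shrinkage prior on the weights.
# Prior scale, learning rate, and noise level are illustrative assumptions.
import torch

torch.manual_seed(0)

# Synthetic data: only 2 of 10 features are relevant (sparse ground truth).
n, d = 200, 10
w_true = torch.zeros(d)
w_true[:2] = torch.tensor([3.0, -2.0])
X = torch.randn(n, d)
y = X @ w_true + 0.1 * torch.randn(n)

# Variational posterior q(w) = N(mu, diag(sigma^2)), with sigma = softplus(rho).
mu = torch.zeros(d, requires_grad=True)
rho = torch.full((d,), -3.0, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=0.05)

prior = torch.distributions.Laplace(0.0, 0.1)  # sparsity-inducing shrinkage prior
noise_std = 0.1                                # observation noise, assumed known

for step in range(2000):
    opt.zero_grad()
    sigma = torch.nn.functional.softplus(rho)
    q = torch.distributions.Normal(mu, sigma)
    w = q.rsample()  # reparameterization trick: gradients flow through the sample
    log_lik = torch.distributions.Normal(X @ w, noise_std).log_prob(y).sum()
    # Negative ELBO = -E_q[log p(y|w)] - E_q[log p(w)] + E_q[log q(w)],
    # estimated with a single Monte Carlo sample of w.
    loss = -log_lik - prior.log_prob(w).sum() + q.log_prob(w).sum()
    loss.backward()
    opt.step()

# Posterior means of the irrelevant weights should be shrunk toward zero,
# while their posterior standard deviations quantify the remaining uncertainty.
print("posterior means:", mu.detach().numpy().round(2))
print("posterior stds: ", torch.nn.functional.softplus(rho).detach().numpy().round(2))
```

The same recipe carries over conceptually to logistic regression and neural networks by swapping the Gaussian likelihood for the appropriate one and placing the shrinkage prior on all network weights, although heavier-tailed priors than the Laplace are often preferred in that setting.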