Pruning parameters of supported layers in :attr:`main_program` via the
mask generation function specified by :attr:`mask_algo`. This
function supports both training and inference, controlled by :attr:`with_mask`.
If :attr:`with_mask` is True, it also prunes the ASP mask Variables related to the parameters;
otherwise only the parameters are pruned.
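For intuition, below is a minimal NumPy sketch (an illustration only, not Paddle's actual implementation) of what a 1-D `n:m` mask generation function such as `mask_1d` conceptually does: in every group of `m` consecutive values it keeps the `n` largest-magnitude entries and zeroes the rest. The helper name `mask_1d_sketch` is hypothetical.
.. code-block:: python

    import numpy as np

    def mask_1d_sketch(weights, n=2, m=4):
        # Keep the n largest-magnitude entries in every group of m
        # consecutive values; zero out the rest.
        groups = weights.reshape(-1, m)
        mask = np.zeros_like(groups)
        keep = np.argsort(np.abs(groups), axis=1)[:, -n:]
        np.put_along_axis(mask, keep, 1.0, axis=1)
        return mask.reshape(weights.shape)

    w = np.random.rand(4, 8).astype('float32')
    mask = mask_1d_sketch(w, n=2, m=4)   # every group of 4 has exactly 2 ones
    pruned_w = w * mask                  # weights after applying the mask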
...
...
inference only. To obtain OptimizerWithSparsityGuarantee, please see `sparsity.decorate()`.
Args:
place (paddle.CPUPlace()|paddle.CUDAPlace(N)): Device place for pruned parameter and mask Variables, where N is the GPU's id. It should be the same place used to create the Executor instance.
main_program (Program, optional): Program with model definition and its parameters. Default is `paddle.static.default_main_program()`.
n (int): n of `n:m` sparse pattern.
m (int): m of `n:m` sparse pattern.
mask_algo (string, optional): The function name used to generate the sparse mask. Default is `mask_1d`.
                              The valid inputs should be one of 'mask_1d', 'mask_2d_greedy' and 'mask_2d_best'.
with_mask (bool, optional): Whether to also prune the mask Variables related to the parameters. True means pruning them as well; False means not. Default is True.
Returns:
dictionary: A dictionary mapping each parameter name (string) to its corresponding mask Variable.
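The sketch below (a hedged illustration, not part of the official documentation) shows how the returned dictionary might be inspected; it assumes `place` and `main_program` have already been built, decorated and initialized as in the example that follows.
.. code-block:: python

    # Hypothetical usage sketch: prune and inspect the returned mapping.
    param_to_mask = sparsity.prune_model(place, main_program, n=2, m=4,
                                         mask_algo='mask_1d', with_mask=True)
    for param_name, mask_var in param_to_mask.items():
        # Each mask Variable is expected to match its parameter's shape.
        print(param_name, mask_var.shape)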
...
...
.. code-block:: python
import paddle
from paddle.static import sparsity

paddle.enable_static()

main_program = paddle.static.Program()
startup_program = paddle.static.Program()

place = paddle.CPUPlace()
if paddle.is_compiled_with_cuda():
    place = paddle.CUDAPlace(0)

with paddle.static.program_guard(main_program, startup_program):