In this dict you specify the parameter names and the axes of the corresponding tensors that you want to make continuous for further pruning. Thus, if you want to prune the convolution layer4.0.conv1 along its output dimension, add {'layer4.0.conv1.weight': [0]} to the continuous_dims dict. If the input channel dimension of this convolution layer is also pruned, use {'layer4.0.conv1.weight': [0, 1]} instead. A short sketch is given below.
Note that the bias parameter of a convolution layer must have the same sampling grid as the weight parameter along the output channels, but it is not necessary to list biases in continuous_dims, because the torch_integral.graph module automatically detects such related tensors.
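For illustration, a minimal sketch of building such a continuous_dims dict for a ResNet-18, assuming the torch_integral.IntegralWrapper entry point; the exact wrapper signature may differ in your version of the library:

```python
# A minimal sketch, assuming the torch_integral.IntegralWrapper entry point;
# check the library docs for the exact call signature in your version.
import torchvision
import torch_integral

model = torchvision.models.resnet18(weights=None)

# Prune layer4.0.conv1 along its output-channel dimension (axis 0).
continuous_dims = {"layer4.0.conv1.weight": [0]}

# If the input channels of this convolution are pruned as well,
# list both axes instead:
# continuous_dims = {"layer4.0.conv1.weight": [0, 1]}

# The bias of layer4.0.conv1 is not listed: torch_integral.graph
# detects that it shares the output-channel sampling grid with the weight.
wrapper = torch_integral.IntegralWrapper(init_from_discrete=True)
inn_model = wrapper(model, (1, 3, 224, 224), continuous_dims)
```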

If the question is why do we…
