Commit aa0d2350 authored by Alexander Smorkalov

Merge pull request #989 from zihaomu:qgemm_and_squeeze_opset13_onnximporter

### OpenCV: Open Source Computer Vision Library
This repository contains extra data for the OpenCV library.
#### Resources
* Homepage: http://opencv.org
* Docs: http://docs.opencv.org
* Q&A forum: https://forum.opencv.org
* previous forum (read only): http://answers.opencv.org
* Issue tracking: https://github.com/opencv/opencv/issues
#### Contributing
Please read before starting work on a pull request: https://github.com/opencv/opencv/wiki/How_to_contribute
Summary of guidelines:
* One pull request per issue;
* Choose the right base branch;
* Include tests and documentation;
* Clean up "oops" commits before submitting;
* Follow the coding style guide.
@@ -774,6 +774,7 @@ input = Variable(torch.randn(3, 1, 2, 4))
model = Squeeze()
model.eval()
save_data_and_model("squeeze", input, model)
save_data_and_model("squeeze_axes_op13", input, model, version=13)
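For context on the `squeeze_axes_op13` case: opset 13 changed ONNX `Squeeze` so that `axes` is supplied as an optional tensor input rather than an attribute. A minimal NumPy sketch of those semantics (the function name `squeeze_opset13` is illustrative, not part of the exporter):

```python
import numpy as np

def squeeze_opset13(x, axes=None):
    """Mimic ONNX Squeeze-13 semantics: axes arrives as an optional
    tensor input; when absent, every size-1 dimension is removed."""
    if axes is None:
        return np.squeeze(x)
    # ONNX allows negative axes; normalize them against the input rank.
    axes = tuple(a % x.ndim for a in axes)
    return np.squeeze(x, axis=axes)

x = np.random.randn(3, 1, 2, 4)
print(squeeze_opset13(x, axes=[1]).shape)  # (3, 2, 4)
```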
class Div(nn.Module):
@@ -269,4 +269,13 @@ model = nn.Sequential(
nn.Linear(84, 10)
)
input = Variable(torch.randn(1, 3, 32, 32))
quantize_and_save_model("quantized_constant", input, model, wt_type="int8", per_channel=True)
\ No newline at end of file
quantize_and_save_model("quantized_constant", input, model, wt_type="int8", per_channel=True)
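`per_channel=True` above means each output channel of a weight tensor gets its own quantization scale, which preserves more precision than a single per-tensor scale. A hedged NumPy sketch of the idea (`quantize_per_channel` is an illustrative helper, not the tool's API):

```python
import numpy as np

def quantize_per_channel(w, axis=0):
    """Symmetric int8 quantization with one scale per slice along `axis`."""
    # Reduce the absolute maximum over every axis except the channel axis.
    amax = np.abs(w).max(axis=tuple(i for i in range(w.ndim) if i != axis))
    scales = np.where(amax > 0, amax / 127.0, 1.0)
    # Reshape scales so they broadcast along the channel axis.
    shape = [1] * w.ndim
    shape[axis] = -1
    q = np.clip(np.round(w / scales.reshape(shape)), -127, 127).astype(np.int8)
    return q, scales

w = np.random.randn(10, 84).astype(np.float32)  # e.g. a Linear(84, 10) weight
q, s = quantize_per_channel(w)
print(q.shape, s.shape)  # (10, 84) (10,)
```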
class Gemm(nn.Module):
    def forward(self, x):
        mat1 = torch.ones(3, 3)
        return torch.mm(x, mat1)
input = Variable(torch.randn(1, 3))
model = Gemm()
quantize_and_save_model("quantized_gemm", input, model, act_type="int8", wt_type="int8", per_channel=False)
\ No newline at end of file
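The per-tensor scheme used for `quantized_gemm` (`per_channel=False`, int8 activations and weights) can be approximated in NumPy: quantize both operands with a single symmetric scale each, do the matmul in int32, then rescale back to float. The helper names below are illustrative, not ONNX Runtime's API:

```python
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor int8 quantization: one scale for the whole tensor."""
    amax = np.abs(x).max()
    scale = amax / 127.0 if amax > 0 else 1.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

x = np.random.randn(1, 3).astype(np.float32)
w = np.ones((3, 3), dtype=np.float32)  # matches mat1 in the Gemm above
qx, sx = quantize_int8(x)
qw, sw = quantize_int8(w)
# Integer matmul accumulated in int32, then rescaled to float.
y = qx.astype(np.int32) @ qw.astype(np.int32) * (sx * sw)
print(np.allclose(y, x @ w, atol=0.1))
```

The accumulation is widened to int32 because an int8 × int8 dot product overflows int8 almost immediately; the combined scale `sx * sw` undoes both quantizations in one multiply.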