PaddlePaddle / X2Paddle · Issue #252 (Closed)
Opened May 07, 2020 by saxon_zh (Guest)

Problem converting an ONNX model to Paddle

Created by: CoderAnn

I ran into a problem converting an ONNX model to Paddle and would like some advice. With torch 1.1.0, exporting to ONNX failed; after switching to torch 1.0.1, the PyTorch-to-ONNX export worked, but the ONNX-to-Paddle conversion then failed as shown below. How can I resolve this? Thanks! (PS: my network contains an upsampling layer.)

```
Now translating model from onnx to paddle.
model ir_version: 3, op version: 9
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 1585340438
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:537] Reading dangerously large protocol message. If the message turns out to be larger than 2147483647 bytes, parsing will be halted for security reasons. To increase the limit (or to disable these warnings), see CodedInputStream::SetTotalBytesLimit() in google/protobuf/io/coded_stream.h.
[libprotobuf WARNING google/protobuf/io/coded_stream.cc:78] The total number of bytes read was 1585340438
(op_type:Tanh, name:): Inferred elem type differs from existing elem type: (INT64) vs (FLOAT)
Traceback (most recent call last):
  File "/home/vis/hongzhibin/env/anaconda3/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.7.1', 'console_scripts', 'x2paddle')()
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/convert.py", line 248, in main
    onnx2paddle(args.model, args.save_dir, params_merge)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/convert.py", line 172, in onnx2paddle
    model = ONNXDecoder(model_path)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/decoder/onnx_decoder.py", line 325, in __init__
    self.check_model_running_state(onnx_model)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/decoder/onnx_decoder.py", line 481, in check_model_running_state
    model = onnx.shape_inference.infer_shapes(model)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/onnx/shape_inference.py", line 35, in infer_shapes
    inferred_model_str = C.infer_shapes(model_str)
RuntimeError: Inferred elem type differs from existing elem type: (INT64) vs (FLOAT)
```

I also tried exporting to ONNX with torch 1.2.0; the ONNX-to-Paddle conversion failed then as well. Any version combination that gets me from ONNX to Paddle would be fine.

```
Traceback (most recent call last):
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/decoder/onnx_decoder.py", line 494, in check_model_running_state
    sess = rt.InferenceSession(model_path)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 23, in __init__
    self._load_model()
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/onnxruntime/capi/session.py", line 35, in _load_model
    self._sess.load_model(self._path_or_bytes, providers)
onnxruntime.capi.onnxruntime_pybind11_state.RuntimeException: [ONNXRuntimeError] : 6 : RUNTIME_EXCEPTION : Exception during initialization: /onnxruntime_src/onnxruntime/core/providers/cpu/tensor/upsample.h:209 void onnxruntime::UpsampleBase::ScalesValidation(const std::vector&, onnxruntime::UpsampleMode) const scale >= 1 was false. Scale value should be greater than or equal to 1.
Stacktrace:

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/vis/hongzhibin/env/anaconda3/bin/x2paddle", line 11, in <module>
    load_entry_point('x2paddle==0.7.1', 'console_scripts', 'x2paddle')()
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/convert.py", line 248, in main
    onnx2paddle(args.model, args.save_dir, params_merge)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/convert.py", line 172, in onnx2paddle
    model = ONNXDecoder(model_path)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/decoder/onnx_decoder.py", line 325, in __init__
    self.check_model_running_state(onnx_model)
  File "/home/vis/hongzhibin/env/anaconda3/lib/python3.6/site-packages/x2paddle-0.7.1-py3.6.egg/x2paddle/decoder/onnx_decoder.py", line 503, in check_model_running_state
    "onnxruntime inference onnx model failed, Please confirm the correctness of onnx model by onnxruntime, if onnx model is correct, please submit issue in github."
Exception: onnxruntime inference onnx model failed, Please confirm the correctness of onnx model by onnxruntime, if onnx model is correct, please submit issue in github.
```
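For context on the second failure: onnxruntime is rejecting the model at session-creation time because the opset-9 `Upsample` operator only accepts scales of at least 1, so any layer that shrinks its input (e.g. a `scale_factor` below 1) cannot be run as an opset-9 `Upsample` node. A minimal sketch of that constraint in plain Python (illustrative names, not the onnxruntime API):

```python
def validate_upsample_scales(scales):
    """Mirror of the opset-9 Upsample rule enforced by onnxruntime's
    UpsampleBase::ScalesValidation: every scale must be >= 1."""
    for s in scales:
        if s < 1:
            # Same wording as the error in the traceback above.
            raise ValueError("scale >= 1 was false. Scale value should be "
                             "greater than or equal to 1.")
    return True

# A 2x nearest-neighbour upsample on NCHW input passes the check:
validate_upsample_scales([1.0, 1.0, 2.0, 2.0])

# A downsampling layer (scale 0.5) would raise the ValueError, which is
# what InferenceSession(model_path) surfaces as RUNTIME_EXCEPTION.
```

If the network really does downsample somewhere, one commonly suggested workaround (an assumption here, not verified against this model) is to re-export with a newer opset, e.g. `torch.onnx.export(model, dummy_input, "model.onnx", opset_version=11)`, which emits `Resize` nodes that permit scales below 1.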

Reference: paddlepaddle/X2Paddle#252