Unverified · Commit 91212104, authored by zyfncg, committed by GitHub

Fix bug of data transform in inference executor (#41349)

* fix bug of data transform in inference executor

* fix bug
Parent fac7fd42
@@ -2176,6 +2176,16 @@ Scope* OperatorWithKernel::PreparePhiData(
 if (!new_scope) {
   new_scope = &scope.NewScope();
 }
+// For inference, if a GPU model contains an op that can only run on CPU,
+// every input would otherwise produce the same result as the first one.
+// The reason is that when a GPU tensor feeds a CPU kernel, we create a new
+// CPU tensor in the transfer scope. However, if enable_cache_runtime_context_
+// is set, the cached context keeps handing back that first CPU tensor rather
+// than the current GPU tensor. Thus, we set pre_scope_ = nullptr to trigger
+// `new RuntimeContext()` in RunImpl().
+if (enable_cache_runtime_context_) {
+  pre_scope_ = nullptr;
+}
 // Create new var with the same name in transfer scopes
 auto* trans_var = new_scope->Var(name_vec[offset]);
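To make the caching problem described in the new comment concrete, here is a minimal standalone sketch of the pattern this hunk relies on. The types `Op`, `Scope`, and `RuntimeContext` below are simplified stand-ins, not Paddle's real `OperatorWithKernel`, `Scope`, or `RuntimeContext` classes: the cached context is only rebuilt when the recorded scope pointer no longer matches, so resetting `pre_scope_` after a data transform forces the next run to pick up fresh inputs instead of the first run's transferred copies.

```cpp
// Minimal illustration (hypothetical types) of why clearing the cached
// scope pointer forces RunImpl() to rebuild its RuntimeContext.
#include <iostream>
#include <memory>

struct Scope {};

struct RuntimeContext {
  explicit RuntimeContext(const Scope& s) : scope(&s) {}
  const Scope* scope;
};

class Op {
 public:
  void RunImpl(const Scope& cur_scope) {
    // Rebuild the cached context only when none exists yet or when the
    // scope it was built from has changed (or was deliberately reset).
    if (!runtime_ctx_ || pre_scope_ != &cur_scope) {
      runtime_ctx_ = std::make_unique<RuntimeContext>(cur_scope);
      pre_scope_ = &cur_scope;
      std::cout << "RuntimeContext rebuilt\n";
    } else {
      std::cout << "cached RuntimeContext reused\n";
    }
  }

  void PreparePhiData() {
    // A data transform just copied GPU inputs into fresh CPU tensors in a
    // new scope; drop the cached scope pointer so the next RunImpl() builds
    // a new RuntimeContext instead of reading the first run's copies.
    pre_scope_ = nullptr;
  }

 private:
  std::unique_ptr<RuntimeContext> runtime_ctx_;
  const Scope* pre_scope_ = nullptr;
};

int main() {
  Scope scope;
  Op op;
  op.RunImpl(scope);    // first run: context is built
  op.RunImpl(scope);    // same scope: cached context is reused
  op.PreparePhiData();  // a data transform happened: invalidate the cache
  op.RunImpl(scope);    // rebuilt, so fresh inputs are picked up
}
```

Under this reading, without the reset the final `RunImpl()` call would keep reusing the context built from the first run's transferred CPU tensors, which is exactly the "same result for every input" symptom described in the comment above.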
@@ -64,7 +64,7 @@ void ArangeKernel(const Context& dev_ctx,
 PD_REGISTER_KERNEL(
     arange, GPU, ALL_LAYOUT, phi::ArangeKernel, float, double, int64_t, int) {
-  kernel->InputAt(0).SetBackend(phi::Backend::CPU);
-  kernel->InputAt(1).SetBackend(phi::Backend::CPU);
-  kernel->InputAt(2).SetBackend(phi::Backend::CPU);
+  kernel->InputAt(0).SetBackend(phi::Backend::ALL_BACKEND);
+  kernel->InputAt(1).SetBackend(phi::Backend::ALL_BACKEND);
+  kernel->InputAt(2).SetBackend(phi::Backend::ALL_BACKEND);
 }
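As I read the two hunks together, they attack the same symptom from both sides: the executor-side change invalidates the cached RuntimeContext whenever PreparePhiData has transferred data into a new scope, while the kernel-side change stops pinning arange's start/end/step inputs to the CPU backend (ALL_BACKEND accepts them on whichever backend they already live), so the inference executor no longer needs to create transferred CPU copies for this kernel in the first place.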