MindSpore / akg · commit 11ed37cc

!91 rm untested operator mean
Merge pull request !91 from lingyunli63/rm_mean

Authored on Aug 03, 2020 by mindspore-ci-bot; committed via Gitee on Aug 03, 2020.
Parents: 949a4553, ff719bda
Showing 3 changed files with 0 additions and 117 deletions (+0 -117):

    python/akg/ms/gpu/__init__.py      +0  -1
    python/akg/ms/gpu/mean.py          +0  -69
    python/akg/ops/math_gpu/mean.py    +0  -47
python/akg/ms/gpu/__init__.py

```diff
@@ -26,7 +26,6 @@ from .logical_or import LogicalOr
 from .relu6_grad import ReLU6Grad
 from .squeeze import Squeeze
 from .squeeze_grad import SqueezeGrad, gpu_schedule_SqueezeGrad
-from .mean import SimpleMean
 from .sub import Sub
 from .mul import Mul
 from .hsigmoid import HSigmoid
```
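For context, the removed line is the package-level re-export of the GPU SimpleMean kernel. A hypothetical downstream check like the one below (not part of this commit) shows what changes for callers once the export and its module are gone:

```python
# Hypothetical downstream check, not from the repository: before !91 the
# GPU SimpleMean kernel was re-exported from akg.ms.gpu; after this commit
# the import fails while the remaining operators stay available.
from akg.ms.gpu import Squeeze, Sub, Mul  # still exported after this commit

try:
    from akg.ms.gpu import SimpleMean  # worked before !91
except ImportError:
    SimpleMean = None  # expected once the export and its module are removed
```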
python/akg/ms/gpu/mean.py (deleted; file mode 100644 → 0)

```python
#!/usr/bin/env python3
# coding: utf-8
# Copyright 2020 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""mean op compute and schedule"""
from .default_schedule import DEFAULT_GPU_THREAD
from akg.ops.math_gpu.sum_value import sum_value
import akg
from akg.ops.math_gpu.mean import mean


def gpu_schedule_Mean(outs):
    """
    gpu schedule function for mean.

    Args:
        outs (tvm.tensor.Tensor): outputs of compute.

    Returns:
        sch (schedule.Schedule): The created schedule.
    """
    out = outs[0] if isinstance(outs, list) else outs

    sch = tvm.create_schedule(out.op)
    if out.op.name == "T_divide":
        tensor_c = out
    else:
        # squeeze
        tensor_c = out.op.input_tensors[0]

    tensor_b = tensor_c.op.input_tensors[0]
    if len(tensor_c.op.axis) >= 2:
        sch[tensor_b].compute_at(sch[tensor_c], tensor_c.op.axis[1])
    else:
        sch[tensor_b].compute_at(sch[tensor_c], tensor_c.op.axis[0])

    bx, tx = sch[tensor_c].split(tensor_c.op.axis[0], factor=DEFAULT_GPU_THREAD)
    sch[tensor_c].bind(bx, tvm.thread_axis("blockIdx.x"))
    sch[tensor_c].bind(tx, tvm.thread_axis("threadIdx.x"))
    return sch


@akg.schedule(gpu_schedule_Mean)
def Mean(data, axis=None, keepdims=False):
    return mean(data, axis, keepdims)


@akg.schedule(gpu_schedule_Mean)
def SimpleMean(x):
    """
    SimpleMean compute the mean of the input 4D Tensor over last two axises and keep reduced dimensions.

    Args:
        x (tvm.tensor.Tensor): Tensor of type float16, float32.

    Returns:
        tvm.tensor.Tensor, has the same type as x, output shape will be (a, b, 1, 1) if input Tensor x is (a, b, c, d).
    """
    axis = (2, 3)
    keepdims = True
    return mean(x, axis, keepdims)
```
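The removed schedule follows a plain split-and-bind pattern: attach the reduction stage under the divide stage with compute_at, then split the outermost axis into a block axis and a thread axis. The sketch below reproduces that pattern on a standalone sum-then-divide mean compute. It is illustrative only; it assumes akg re-exports the classic TVM schedule API as `akg.tvm`/`akg.topi`, and a fixed factor of 1024 stands in for `DEFAULT_GPU_THREAD`.

```python
# Illustrative sketch, not from the commit: the split/bind pattern used by
# gpu_schedule_Mean, applied to a standalone "sum then divide" mean compute.
# Assumptions: akg.tvm/akg.topi expose the classic TVM schedule API, and
# 1024 is a stand-in for DEFAULT_GPU_THREAD.
import akg.tvm as tvm
import akg.topi as topi

def build_mean_schedule(shape=(8, 16, 32, 32)):
    data = tvm.placeholder(shape, name="data", dtype="float32")
    # mean = sum over the last two axes divided by the reduced element count
    summed = topi.sum(data, axis=(2, 3), keepdims=True)
    res = topi.divide(summed, shape[2] * shape[3])

    sch = tvm.create_schedule(res.op)
    # Keep the reduction stage attached under the divide stage (compute_at),
    # mirroring the removed schedule.
    sch[summed].compute_at(sch[res], res.op.axis[1])
    # Split the outer axis and bind the pieces to CUDA block/thread indices.
    bx, tx = sch[res].split(res.op.axis[0], factor=1024)
    sch[res].bind(bx, tvm.thread_axis("blockIdx.x"))
    sch[res].bind(tx, tvm.thread_axis("threadIdx.x"))
    return sch, [data, res]
```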
python/akg/ops/math_gpu/mean.py (deleted; file mode 100644 → 0)

```python
# Copyright 2019 Huawei Technologies Co., Ltd
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""operator dsl function: mean"""
import akg.topi
import akg.tvm
from akg.utils import format_transform as ft_util
from akg.utils import validation_check as vc_util
from akg.ops.math_gpu import sum_value


@vc_util.check_input_type(akg.tvm.tensor.Tensor, (list, tuple, int, type(None)), (bool, type(None)))
def mean(data, axis=None, keepdims=False):
    """
    Computes the mean of the values of a Tensor over the whole dataset.

    Args:
        data (tvm.tensor.Tensor): Tensor.
        axis (Union[list, tuple, int, None]): If the tuple is empty, the axis equal to None.
        keepdims (bool): If keepdims equal to True, the result shape length is same to input shape length.

    Returns:
        tvm.tensor.Tensor, has the same type as data. If keepdims equal to True, all reduced dimensions are
        retained with length 1, else these reduced axes will be eliminated.
    """
    shape = [x.value for x in data.shape]
    vc_util.reduce_axis_check(shape, axis)
    axis = ft_util.refine_reduce_axis(data, axis)

    count = 1
    for i in axis:
        count *= shape[i]
    output, _ = sum_value.sum_value(data, axis, keepdims)
    res = akg.topi.divide(output, count)

    return res
```
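As a sanity check on the arithmetic, the removed `mean` is simply the reduced sum divided by the product of the reduced extents. A small NumPy equivalent (illustrative only, not akg code):

```python
# Illustrative NumPy equivalent, not part of the commit: mean over `axis`
# equals the sum over `axis` divided by the number of reduced elements.
import numpy as np

x = np.arange(24, dtype=np.float32).reshape(2, 3, 4)
axis = (1, 2)

count = 1
for i in axis:
    count *= x.shape[i]        # count = 3 * 4 = 12

manual = x.sum(axis=axis, keepdims=True) / count
assert np.allclose(manual, x.mean(axis=axis, keepdims=True))
```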