Crayon鑫 / Paddle (forked from PaddlePaddle / Paddle)

Commit c42d662e (unverified)
Authored 4 years ago by yaoxuefeng; committed 4 years ago via GitHub
modify roll test=develop (#25321)
Parent: bdc2c2db
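
In user-facing terms, this commit renames the roll argument and operator attribute `dims` to `axis` across the C++ operator, the Python wrapper, and the unit tests, so `paddle.roll(x, shifts, dims=...)` becomes `paddle.roll(x, shifts, axis=...)`. A minimal before/after sketch, based only on the example in the updated docstring (imperative mode, Paddle at the version this commit targets):

    import numpy as np
    import paddle

    data = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])

    paddle.enable_imperative()               # imperative (dygraph) mode, as in the new docstring
    x = paddle.imperative.to_variable(data)

    # old spelling (removed by this commit): paddle.roll(x, shifts=1, dims=0)
    out = paddle.roll(x, shifts=1, axis=0)   # new spelling
    print(out.numpy())
    # [[7. 8. 9.]
    #  [1. 2. 3.]
    #  [4. 5. 6.]]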
Showing 4 changed files with 70 additions and 47 deletions (+70 −47).
paddle/fluid/operators/roll_op.cc                      +2  −2
paddle/fluid/operators/roll_op.h                       +4  −4
python/paddle/fluid/tests/unittests/test_roll_op.py    +20 −6
python/paddle/tensor/manipulation.py                   +44 −35
paddle/fluid/operators/roll_op.cc
@@ -33,7 +33,7 @@ class RollOp : public framework::OperatorWithKernel {
                       platform::errors::InvalidArgument(
                           "Output(Out) of RollOp should not be null."));
-    auto dims = ctx->Attrs().Get<std::vector<int64_t>>("dims");
+    auto dims = ctx->Attrs().Get<std::vector<int64_t>>("axis");
     auto shifts = ctx->Attrs().Get<std::vector<int64_t>>("shifts");
 
     PADDLE_ENFORCE_EQ(dims.size(), shifts.size(),
@@ -92,7 +92,7 @@ class RollOpMaker : public framework::OpProtoAndCheckerMaker {
                 "of the tensor are shifted.")
         .SetDefault({});
-    AddAttr<std::vector<int64_t>>("dims",
+    AddAttr<std::vector<int64_t>>("axis",
                                   "Axis along which to roll. It must have the same size "
                                   "with shifts.")
         .SetDefault({});
paddle/fluid/operators/roll_op.h
@@ -82,7 +82,7 @@ class RollKernel : public framework::OpKernel<T> {
     auto& input = input_var->Get<LoDTensor>();
     auto* output = output_var->GetMutable<LoDTensor>();
     std::vector<int64_t> shifts = context.Attr<std::vector<int64_t>>("shifts");
-    std::vector<int64_t> dims = context.Attr<std::vector<int64_t>>("dims");
+    std::vector<int64_t> dims = context.Attr<std::vector<int64_t>>("axis");
 
     std::vector<T> out_vec;
     TensorToVector(input, context.device_context(), &out_vec);
@@ -94,8 +94,8 @@ class RollKernel : public framework::OpKernel<T> {
       PADDLE_ENFORCE_EQ(
           dims[i] < input_dim.size() && dims[i] >= (0 - input_dim.size()), true,
           platform::errors::OutOfRange(
-              "Attr(dims[%d]) is out of range, It's expected "
-              "to be in range of [-%d, %d]. But received Attr(dims[%d]) = %d.",
+              "Attr(axis[%d]) is out of range, It's expected "
+              "to be in range of [-%d, %d]. But received Attr(axis[%d]) = %d.",
               i, input_dim.size(), input_dim.size() - 1, i, dims[i]));
       shift_along_dim(out_vec.data(), input_dim, dims[i], shifts[i]);
     }
@@ -114,7 +114,7 @@ class RollGradKernel : public framework::OpKernel<T> {
     auto& input = input_var->Get<LoDTensor>();
     auto* output = output_var->GetMutable<LoDTensor>();
     std::vector<int64_t> shifts = context.Attr<std::vector<int64_t>>("shifts");
-    std::vector<int64_t> dims = context.Attr<std::vector<int64_t>>("dims");
+    std::vector<int64_t> dims = context.Attr<std::vector<int64_t>>("axis");
 
     std::vector<T> out_vec;
     TensorToVector(input, context.device_context(), &out_vec);
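
For orientation, the kernel's wrap-around shift along a single axis, together with the range check it enforces on the renamed `axis` attribute, can be mimicked with a short NumPy-only sketch. The helper names below are made up for illustration; this is not Paddle code:

    import numpy as np

    def shift_along_dim(arr, axis, shift):
        # Roll `arr` by `shift` positions along `axis`, wrapping elements around,
        # which is the behaviour RollKernel delegates to its shift_along_dim helper.
        n = arr.shape[axis]
        if n == 0:
            return arr
        shift %= n  # shifts larger than the axis length wrap around
        return np.concatenate(
            (np.take(arr, np.arange(n - shift, n), axis=axis),
             np.take(arr, np.arange(0, n - shift), axis=axis)),
            axis=axis)

    def check_axis(axis_list, ndim):
        # Mirrors the PADDLE_ENFORCE_EQ check: each entry must lie in [-ndim, ndim).
        for i, a in enumerate(axis_list):
            if not (-ndim <= a < ndim):
                raise ValueError("Attr(axis[%d]) is out of range: %d" % (i, a))

    x = np.arange(6).reshape(2, 3)
    check_axis([1], x.ndim)
    print(shift_along_dim(x, axis=1, shift=1))  # same result as np.roll(x, 1, axis=1)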
python/paddle/fluid/tests/unittests/test_roll_op.py
@@ -28,17 +28,17 @@ class TestRollOp(OpTest):
         self.op_type = "roll"
         self.init_dtype_type()
         self.inputs = {'X': np.random.random(self.x_shape).astype(self.dtype)}
-        self.attrs = {'shifts': self.shifts, 'dims': self.dims}
+        self.attrs = {'shifts': self.shifts, 'axis': self.axis}
         self.outputs = {
             'Out': np.roll(self.inputs['X'], self.attrs['shifts'],
-                           self.attrs['dims'])
+                           self.attrs['axis'])
         }
 
     def init_dtype_type(self):
         self.dtype = np.float64
         self.x_shape = (100, 4, 5)
         self.shifts = [101, -1]
-        self.dims = [0, -2]
+        self.axis = [0, -2]
 
     def test_check_output(self):
         self.check_output()
@@ -52,7 +52,7 @@ class TestRollOpCase2(TestRollOp):
         self.dtype = np.float32
         self.x_shape = (100, 10, 5)
         self.shifts = [8, -1]
-        self.dims = [-1, -2]
+        self.axis = [-1, -2]
 
 
 class TestRollAPI(unittest.TestCase):
@@ -78,7 +78,7 @@ class TestRollAPI(unittest.TestCase):
         # case 2:
         with program_guard(Program(), Program()):
             x = fluid.layers.data(name='x', shape=[-1, 3])
-            z = paddle.roll(x, shifts=1, dims=0)
+            z = paddle.roll(x, shifts=1, axis=0)
             exe = fluid.Executor(fluid.CPUPlace())
             res, = exe.run(feed={'x': self.data_x},
                            fetch_list=[z.name],
@@ -101,12 +101,26 @@ class TestRollAPI(unittest.TestCase):
         # case 2:
         with fluid.dygraph.guard():
             x = fluid.dygraph.to_variable(self.data_x)
-            z = paddle.roll(x, shifts=1, dims=0)
+            z = paddle.roll(x, shifts=1, axis=0)
             np_z = z.numpy()
             expect_out = np.array([[7.0, 8.0, 9.0], [1.0, 2.0, 3.0],
                                    [4.0, 5.0, 6.0]])
             self.assertTrue(np.allclose(expect_out, np_z))
 
+    def test_roll_op_false(self):
+        self.input_data()
+
+        def test_axis_out_range():
+            with program_guard(Program(), Program()):
+                x = fluid.layers.data(name='x', shape=[-1, 3])
+                z = paddle.roll(x, shifts=1, axis=10)
+                exe = fluid.Executor(fluid.CPUPlace())
+                res, = exe.run(feed={'x': self.data_x},
+                               fetch_list=[z.name],
+                               return_numpy=False)
+
+        self.assertRaises(ValueError, test_axis_out_range)
+
 
 if __name__ == "__main__":
     unittest.main()
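
The added `test_roll_op_false` case checks that an out-of-range axis is rejected. A stripped-down, standalone version of the same assertion, here using dygraph mode and a hypothetical 3x3 input rather than the test class's fixtures:

    import unittest
    import numpy as np
    import paddle
    import paddle.fluid as fluid

    class TestRollAxisOutOfRange(unittest.TestCase):
        def test_axis_out_of_range(self):
            data = np.arange(9, dtype='float32').reshape(3, 3)
            with fluid.dygraph.guard():
                x = fluid.dygraph.to_variable(data)
                # axis=10 exceeds the rank of x, so the range check added in
                # python/paddle/tensor/manipulation.py raises ValueError.
                with self.assertRaises(ValueError):
                    paddle.roll(x, shifts=1, axis=10)

    if __name__ == "__main__":
        unittest.main()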
python/paddle/tensor/manipulation.py
@@ -104,23 +104,24 @@ def flip(input, dims, name=None):
     return out
 
 
-def roll(input, shifts, dims=None):
+def roll(x, shifts, axis=None, name=None):
     """
     :alias_main: paddle.roll
     :alias: paddle.roll,paddle.tensor.roll,paddle.tensor.manipulation.roll
 
-    Roll the `input` tensor along the given dimension(s). Elements that are shifted beyond
-    the last position are re-introduced at the first position. If a dimension is not specified,
+    Roll the `x` tensor along the given axis(axes). With specific 'shifts', Elements that
+    roll beyond the last position are re-introduced at the first according to 'shifts'.
+    If a axis is not specified,
     the tensor will be flattened before rolling and then restored to the original shape.
 
     Args:
-        input (Variable): The input tensor variable.
+        x (Variable): The x tensor variable as input.
         shifts (int|list|tuple): The number of places by which the elements
-                           of the `input` tensor are shifted.
-        dims (int|list|tuple|None): Dimentions along which to roll.
+                           of the `x` tensor are shifted.
+        axis (int|list|tuple|None): axis(axes) along which to roll.
 
     Returns:
-        Variable: A Tensor with same data type as `input`.
+        Variable: A Tensor with same data type as `x`.
 
     Examples:
         .. code-block:: python
@@ -131,48 +132,56 @@ def roll(input, shifts, dims=None):
             data = np.array([[1.0, 2.0, 3.0],
                              [4.0, 5.0, 6.0],
                              [7.0, 8.0, 9.0]])
-            with fluid.dygraph.guard():
-                x = fluid.dygraph.to_variable(data)
-                out_z1 = paddle.roll(x, shifts=1)
-                print(out_z1.numpy())
-                #[[9. 1. 2.]
-                # [3. 4. 5.]
-                # [6. 7. 8.]]
-                out_z2 = paddle.roll(x, shifts=1, dims=0)
-                print(out_z2.numpy())
-                #[[7. 8. 9.]
-                # [1. 2. 3.]
-                # [4. 5. 6.]]
+            paddle.enable_imperative()
+            x = paddle.imperative.to_variable(data)
+            out_z1 = paddle.roll(x, shifts=1)
+            print(out_z1.numpy())
+            #[[9. 1. 2.]
+            # [3. 4. 5.]
+            # [6. 7. 8.]]
+            out_z2 = paddle.roll(x, shifts=1, axis=0)
+            print(out_z2.numpy())
+            #[[7. 8. 9.]
+            # [1. 2. 3.]
+            # [4. 5. 6.]]
     """
     helper = LayerHelper("roll", **locals())
-    origin_shape = input.shape
+    origin_shape = x.shape
     if type(shifts) == int:
         shifts = [shifts]
-    if type(dims) == int:
-        dims = [dims]
-
-    if dims:
-        check_type(dims, 'dims', (list, tuple), 'roll')
+    if type(axis) == int:
+        axis = [axis]
+
+    len_origin_shape = len(origin_shape)
+    if axis:
+        for i in range(len(axis)):
+            if axis[i] >= len_origin_shape or axis[i] < -len_origin_shape:
+                raise ValueError(
+                    "axis is out of range, it should be in range [{}, {}), but received {}".
+                    format(-len_origin_shape, len_origin_shape, axis))
+
+    if axis:
+        check_type(axis, 'axis', (list, tuple), 'roll')
     check_type(shifts, 'shifts', (list, tuple), 'roll')
 
     if in_dygraph_mode():
-        if dims is None:
-            input = core.ops.reshape(input, 'shape', [-1, 1])
-            dims = [0]
-        out = core.ops.roll(input, 'dims', dims, 'shifts', shifts)
+        if axis is None:
+            x = core.ops.reshape(x, 'shape', [-1, 1])
+            axis = [0]
+        out = core.ops.roll(x, 'axis', axis, 'shifts', shifts)
         return core.ops.reshape(out, 'shape', origin_shape)
 
-    out = helper.create_variable_for_type_inference(input.dtype)
+    out = helper.create_variable_for_type_inference(x.dtype)
 
-    if dims is None:
-        input = reshape(input, shape=[-1, 1])
-        dims = [0]
+    if axis is None:
+        x = reshape(x, shape=[-1, 1])
+        axis = [0]
 
     helper.append_op(
         type='roll',
-        inputs={'X': input},
+        inputs={'X': x},
         outputs={'Out': out},
-        attrs={'dims': dims,
+        attrs={'axis': axis,
                'shifts': shifts})
     out = reshape(out, shape=origin_shape, inplace=True)
     return out
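
When `axis` is None, the updated wrapper reshapes `x` to `[-1, 1]`, rolls along axis 0, and then restores the original shape; that is what produces the `out_z1` result in the docstring. A NumPy-only sketch of this flatten-roll-restore path (the helper name is invented for illustration):

    import numpy as np

    def roll_flattened(x, shift):
        # Mirrors the axis-is-None branch: reshape to [-1, 1], roll along axis 0,
        # then reshape back to the original shape.
        origin_shape = x.shape
        rolled = np.roll(x.reshape(-1, 1), shift, axis=0)
        return rolled.reshape(origin_shape)

    data = np.array([[1.0, 2.0, 3.0],
                     [4.0, 5.0, 6.0],
                     [7.0, 8.0, 9.0]])
    print(roll_flattened(data, 1))
    # [[9. 1. 2.]
    #  [3. 4. 5.]
    #  [6. 7. 8.]]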