magicwindyyd / mindspore (forked from MindSpore / mindspore)
Commit ab45bec8
Authored: Aug 22, 2020 by mindspore-ci-bot
Committed: Aug 22, 2020 via Gitee

!4924 Modify API comments and fix error of st
Merge pull request !4924 from byweng/fix_param_check

Parents: 8ee136db, 3422f60d
Showing 5 changed files with 22 additions and 18 deletions (+22 -18):

mindspore/nn/probability/bnn_layers/bnn_cell_wrapper.py (+5 -0)
mindspore/nn/probability/bnn_layers/conv_variational.py (+0 -6)
mindspore/nn/probability/bnn_layers/layer_distribution.py (+2 -2)
mindspore/nn/probability/transforms/transform_bnn.py (+10 -5)
tests/st/probability/test_gpu_vae_gan.py (+5 -5)
mindspore/nn/probability/bnn_layers/bnn_cell_wrapper.py

@@ -67,8 +67,13 @@ class WithBNNLossCell:

    def __init__(self, backbone, loss_fn, dnn_factor=1, bnn_factor=1):
        if isinstance(dnn_factor, bool) or not isinstance(dnn_factor, (int, float)):
            raise TypeError('The type of `dnn_factor` should be `int` or `float`')
        if dnn_factor < 0:
            raise ValueError('The value of `dnn_factor` should >= 0')
        if isinstance(bnn_factor, bool) or not isinstance(bnn_factor, (int, float)):
            raise TypeError('The type of `bnn_factor` should be `int` or `float`')
        if bnn_factor < 0:
            raise ValueError('The value of `bnn_factor` should >= 0')
        self.backbone = backbone
        self.loss_fn = loss_fn
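The factor checks above rely on a Python subtlety: `bool` is a subclass of `int`, so `isinstance(True, int)` is `True`, and the explicit `bool` test is what actually rejects booleans. A minimal standalone sketch of the same pattern (the helper name `check_factor` is hypothetical, not part of MindSpore):

```python
def check_factor(name, value):
    # bool must be rejected explicitly: isinstance(True, int) is True in Python,
    # so the (int, float) check alone would silently accept True/False.
    if isinstance(value, bool) or not isinstance(value, (int, float)):
        raise TypeError('The type of `{}` should be `int` or `float`'.format(name))
    if value < 0:
        raise ValueError('The value of `{}` should >= 0'.format(name))
    return value

check_factor('dnn_factor', 1)      # accepted
check_factor('bnn_factor', 0.5)    # accepted
```

With this ordering, `check_factor('dnn_factor', True)` raises `TypeError` and `check_factor('bnn_factor', -1)` raises `ValueError`, matching the behavior of the inline checks in `__init__`.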
mindspore/nn/probability/bnn_layers/conv_variational.py

@@ -61,12 +61,6 @@ class _ConvVariational(_Conv):

            raise ValueError('Attr \'pad_mode\' of \'Conv2d\' Op passed '
                             + str(pad_mode) + ', should be one of values in \'valid\', \'same\', \'pad\'.')
        if isinstance(stride, bool) or not isinstance(stride, (int, tuple)):
            raise TypeError('The type of `stride` should be `int` of `tuple`')
        if isinstance(dilation, bool) or not isinstance(dilation, (int, tuple)):
            raise TypeError('The type of `dilation` should be `int` of `tuple`')
        # convolution args
        self.in_channels = in_channels
        self.out_channels = out_channels
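The `stride`/`dilation` checks accept either an `int` or a `tuple` because convolution arguments are conventionally broadcast from a scalar to both spatial dimensions. A hypothetical normalizer illustrating that convention (`normalize_conv_arg` is illustrative only, not MindSpore API):

```python
def normalize_conv_arg(name, value):
    # Accept an int (applied to both spatial dims) or a tuple, rejecting bool,
    # mirroring the `stride`/`dilation` type check in _ConvVariational.
    if isinstance(value, bool) or not isinstance(value, (int, tuple)):
        raise TypeError('The type of `{}` should be `int` or `tuple`'.format(name))
    return (value, value) if isinstance(value, int) else value
```

Under this convention `normalize_conv_arg('stride', 2)` yields `(2, 2)`, while an explicit tuple such as `(1, 2)` passes through unchanged.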
mindspore/nn/probability/bnn_layers/layer_distribution.py

@@ -29,7 +29,7 @@ class NormalPrior(Cell):

     To initialize a normal distribution of mean 0 and standard deviation 0.1.

     Args:
-        dtype (class`mindspore.dtype`): The argument is used to define the data type of the output tensor.
+        dtype (:class:`mindspore.dtype`): The argument is used to define the data type of the output tensor.
             Default: mindspore.float32.
         mean (int, float): Mean of normal distribution.
         std (int, float): Standard deviation of normal distribution.

@@ -52,7 +52,7 @@ class NormalPosterior(Cell):

     Args:
         name (str): Name prepended to trainable parameter.
         shape (list, tuple): Shape of the mean and standard deviation.
-        dtype (class`mindspore.dtype`): The argument is used to define the data type of the output tensor.
+        dtype (:class:`mindspore.dtype`): The argument is used to define the data type of the output tensor.
             Default: mindspore.float32.
         loc_mean (int, float): Mean of distribution to initialize trainable parameters. Default: 0.
         loc_std (int, float): Standard deviation of distribution to initialize trainable parameters. Default: 0.1.
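The docstring fix replaces the malformed ``class`mindspore.dtype``` with the Sphinx cross-reference role ``:class:`mindspore.dtype```, which renders as a link to the class in generated API docs. A small sketch of a corrected docstring in the same Google style (the function `make_normal_prior` is illustrative only):

```python
def make_normal_prior(dtype="float32"):
    """Initialize a normal distribution of mean 0 and standard deviation 0.1.

    Args:
        dtype (:class:`mindspore.dtype`): The data type of the output tensor.
            Default: mindspore.float32.
    """
    # The :class: role only affects rendered documentation; at runtime the
    # docstring is plain text reachable via __doc__.
    return dtype
```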
mindspore/nn/probability/transforms/transform_bnn.py

@@ -63,8 +63,13 @@ class TransformToBNN:

    def __init__(self, trainable_dnn, dnn_factor=1, bnn_factor=1):
        if isinstance(dnn_factor, bool) or not isinstance(dnn_factor, (int, float)):
            raise TypeError('The type of `dnn_factor` should be `int` or `float`')
        if dnn_factor < 0:
            raise ValueError('The value of `dnn_factor` should >= 0')
        if isinstance(bnn_factor, bool) or not isinstance(bnn_factor, (int, float)):
            raise TypeError('The type of `bnn_factor` should be `int` or `float`')
        if bnn_factor < 0:
            raise ValueError('The value of `bnn_factor` should >= 0')
        net_with_loss = trainable_dnn.network
        self.optimizer = trainable_dnn.optimizer

@@ -88,9 +93,9 @@ class TransformToBNN:

     Transform the whole DNN model to BNN model, and wrap BNN model by TrainOneStepCell.

     Args:
-        get_dense_args (function): The arguments gotten from the DNN full connection layer. Default: lambda dp:
+        get_dense_args (:class:`function`): The arguments gotten from the DNN full connection layer. Default: lambda dp:
             {"in_channels": dp.in_channels, "out_channels": dp.out_channels, "has_bias": dp.has_bias}.
-        get_conv_args (function): The arguments gotten from the DNN convolutional layer. Default: lambda dp:
+        get_conv_args (:class:`function`): The arguments gotten from the DNN convolutional layer. Default: lambda dp:
             {"in_channels": dp.in_channels, "out_channels": dp.out_channels, "pad_mode": dp.pad_mode,
             "kernel_size": dp.kernel_size, "stride": dp.stride, "has_bias": dp.has_bias}.
         add_dense_args (dict): The new arguments added to BNN full connection layer. Note that the arguments in

@@ -134,10 +139,10 @@ class TransformToBNN:

     Args:
         dnn_layer_type (Cell): The type of DNN layer to be transformed to BNN layer. The optional values are
             nn.Dense, nn.Conv2d.
         bnn_layer_type (Cell): The type of BNN layer to be transformed to. The optional values are
-            DenseReparameterization, ConvReparameterization.
-        get_args (dict): The arguments gotten from the DNN layer. Default: None.
+            DenseReparam, ConvReparam.
+        get_args (:class:`function`): The arguments gotten from the DNN layer. Default: None.
         add_args (dict): The new arguments added to BNN layer. Note that the arguments in `add_args` should not
             duplicate arguments in `get_args`. Default: None.
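The `get_dense_args` default documented above is a plain lambda that pulls constructor arguments off an existing layer so an equivalent BNN layer can be rebuilt with them. A self-contained sketch of that extraction pattern, using a stand-in layer class (`FakeDense` is hypothetical, not a MindSpore type):

```python
class FakeDense:
    # Hypothetical stand-in for nn.Dense, carrying just the attributes
    # the documented default lambda reads.
    def __init__(self, in_channels, out_channels, has_bias=True):
        self.in_channels = in_channels
        self.out_channels = out_channels
        self.has_bias = has_bias

# Same shape as the documented default for get_dense_args.
get_dense_args = lambda dp: {"in_channels": dp.in_channels,
                             "out_channels": dp.out_channels,
                             "has_bias": dp.has_bias}

args = get_dense_args(FakeDense(784, 10, has_bias=False))
```

The returned dict can then be merged with `add_dense_args` and splatted into the BNN layer's constructor, which is why `add_args` must not duplicate keys produced by `get_args`.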
tests/st/probability/test_gpu_vae_gan.py

@@ -108,22 +108,22 @@ class VaeGan(nn.Cell):

         return ld_real, ld_fake, ld_p, recon_x, x, mu, std

-class VaeGanLoss(nn.Cell):
+class VaeGanLoss(ELBO):
     def __init__(self):
         super(VaeGanLoss, self).__init__()
         self.zeros = P.ZerosLike()
         self.mse = nn.MSELoss(reduction='sum')
-        self.elbo = ELBO(latent_prior='Normal', output_prior='Normal')

     def construct(self, data, label):
-        ld_real, ld_fake, ld_p, recon_x, x, mean, std = data
+        ld_real, ld_fake, ld_p, recon_x, x, mu, std = data
         y_real = self.zeros(ld_real) + 1
         y_fake = self.zeros(ld_fake)
-        elbo_data = (recon_x, x, mean, std)
         loss_D = self.mse(ld_real, y_real)
         loss_GD = self.mse(ld_p, y_fake)
         loss_G = self.mse(ld_fake, y_real)
-        elbo_loss = self.elbo(elbo_data, label)
+        reconstruct_loss = self.recon_loss(x, recon_x)
+        kl_loss = self.posterior('kl_loss', 'Normal', self.zeros(mu), self.zeros(mu) + 1, mu, std)
+        elbo_loss = reconstruct_loss + self.sum(kl_loss)
         return loss_D + loss_G + loss_GD + elbo_loss
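The rewritten test computes the ELBO's KL term via `posterior('kl_loss', 'Normal', ...)`, i.e. a KL divergence between two Gaussian distributions, for which a closed form exists. A pure-Python sketch of that formula for scalar parameters (the tensor version is evaluated elementwise; `kl_normal` is an illustrative helper, not MindSpore API):

```python
import math

def kl_normal(mu1, sigma1, mu2, sigma2):
    # KL(N(mu1, sigma1^2) || N(mu2, sigma2^2)) in closed form.
    return (math.log(sigma2 / sigma1)
            + (sigma1 ** 2 + (mu1 - mu2) ** 2) / (2 * sigma2 ** 2)
            - 0.5)
```

For identical distributions the divergence is zero, and shifting the mean of a unit Gaussian by 1 against a standard normal gives 0.5, which makes the formula easy to sanity-check.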