Unverified commit cb1ed0ff authored by Chen Weihang, committed by GitHub

Add API jit save load documentation (#2253)

* add English doc rst

* add chinese doc rst
Parent 4b8a7d28
......@@ -31,6 +31,7 @@ fluid.dygraph
dygraph/guard.rst
dygraph/InstanceNorm.rst
dygraph/InverseTimeDecay.rst
dygraph/jit.rst
dygraph/Layer.rst
dygraph/LayerList.rst
dygraph/LayerNorm.rst
......@@ -55,4 +56,5 @@ fluid.dygraph
dygraph/to_variable.rst
dygraph/TracedLayer.rst
dygraph/Tracer.rst
dygraph/TranslatedLayer.rst
dygraph/TreeConv.rst
.. _api_fluid_dygraph_TranslatedLayer:
TranslatedLayer
-----------------------
.. autoclass:: paddle.fluid.dygraph.TranslatedLayer
:members:
:noindex:
===
jit
===
.. toctree::
:maxdepth: 1
jit/save.rst
jit/load.rst
jit/SaveLoadConfig.rst
.. _api_fluid_dygraph_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
.. autoclass:: paddle.fluid.dygraph.jit.SaveLoadConfig
:members:
:noindex:
\ No newline at end of file
.. _api_fluid_dygraph_jit_load:
load
------------
.. autofunction:: paddle.fluid.dygraph.jit.load
:noindex:
.. _api_fluid_dygraph_jit_save:
save
------------
.. autofunction:: paddle.fluid.dygraph.jit.save
:noindex:
......@@ -13,6 +13,7 @@ paddle.imperative
imperative/grad.rst
imperative/guard.rst
imperative/InverseTimeDecay.rst
imperative/jit.rst
imperative/load.rst
imperative/NaturalExpDecay.rst
imperative/no_grad.rst
......@@ -25,3 +26,4 @@ paddle.imperative
imperative/save.rst
imperative/to_variable.rst
imperative/TracedLayer.rst
imperative/TranslatedLayer.rst
.. _api_imperative_TranslatedLayer:
TranslatedLayer
-------------------------------
:doc_source: paddle.fluid.dygraph.io.TranslatedLayer
===
jit
===
.. toctree::
:maxdepth: 1
jit/save.rst
jit/load.rst
jit/SaveLoadConfig.rst
.. _api_imperative_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.SaveLoadConfig
.. _api_imperative_jit_load:
load
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.load
.. _api_imperative_jit_save:
save
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.save
......@@ -28,6 +28,7 @@ fluid.dygraph
dygraph_cn/guard_cn.rst
dygraph_cn/InstanceNorm_cn.rst
dygraph_cn/InverseTimeDecay_cn.rst
dygraph_cn/jit_cn.rst
dygraph_cn/LambdaDecay_cn.rst
dygraph_cn/Layer_cn.rst
dygraph_cn/LayerList_cn.rst
......@@ -55,4 +56,5 @@ fluid.dygraph
dygraph_cn/to_variable_cn.rst
dygraph_cn/TracedLayer_cn.rst
dygraph_cn/Tracer_cn.rst
dygraph_cn/TranslatedLayer_cn.rst
dygraph_cn/TreeConv_cn.rst
.. _cn_api_fluid_dygraph_TranslatedLayer:
TranslatedLayer
-------------------------------
.. py:class:: paddle.fluid.dygraph.TranslatedLayer(programs, persistable_vars)
``TranslatedLayer`` is a subclass of the imperative-mode :ref:`cn_api_fluid_dygraph_Layer`, built by loading a model through :ref:`cn_api_fluid_dygraph_jit_load`. It can be used in train or eval mode just like an ordinary ``Layer``.

.. note::
    A ``TranslatedLayer`` object cannot be created via its constructor; it can only be built by loading a model through the :ref:`cn_api_fluid_dygraph_jit_load` API.

**Example:**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    BATCH_SIZE = 32
    BATCH_NUM = 20

    def random_batch_reader():
        def _get_random_images_and_labels(image_shape, label_shape):
            image = np.random.random(size=image_shape).astype('float32')
            label = np.random.random(size=label_shape).astype('int64')
            return image, label

        def __reader__():
            for _ in range(BATCH_NUM):
                batch_image, batch_label = _get_random_images_and_labels(
                    [BATCH_SIZE, 784], [BATCH_SIZE, 1])
                yield batch_image, batch_label

        return __reader__

    class LinearNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(LinearNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            return self._linear(x)

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # 1. train & save the model.

    # build the network
    net = LinearNet(784, 1)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())

    # create the DataLoader
    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(random_batch_reader())

    # train
    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = net(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        adam.minimize(avg_loss)
        net.clear_gradients()

    model_path = "linear.example.model"
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[img])

    # 2. load the model as a TranslatedLayer
    translated_layer = fluid.dygraph.jit.load(model_path)

    # inference
    translated_layer.eval()
    x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
    pred = translated_layer(x)

    # fine-tune training
    translated_layer.train()
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=translated_layer.parameters())
    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(random_batch_reader())
    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = translated_layer(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        adam.minimize(avg_loss)
        translated_layer.clear_gradients()
===
jit
===
.. toctree::
:maxdepth: 1
jit_cn/save_cn.rst
jit_cn/load_cn.rst
jit_cn/SaveLoadConfig_cn.rst
.. _cn_api_fluid_dygraph_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
.. py:class:: paddle.fluid.dygraph.jit.SaveLoadConfig()
Configures additional options for the :ref:`cn_api_fluid_dygraph_jit_save` and :ref:`cn_api_fluid_dygraph_jit_load` APIs when saving and loading a :ref:`cn_api_fluid_dygraph_TranslatedLayer`.

**Examples:**

1. Using ``SaveLoadConfig`` when saving a model

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    class SimpleNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(SimpleNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            y = self._linear(x)
            z = self._linear(y)
            return z

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # train the model
    net = SimpleNet(8, 8)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    for i in range(10):
        out = net(x)
        loss = fluid.layers.mean(out)
        loss.backward()
        adam.minimize(loss)
        net.clear_gradients()

    # use SaveLoadConfig when saving the model
    model_path = "simplenet.example.model"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    configs.model_filename = "__simplenet__"
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[x],
        configs=configs)
2. Using ``SaveLoadConfig`` when loading a model

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # use SaveLoadConfig when loading the model
    model_path = "simplenet.example.model"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    configs.model_filename = "__simplenet__"
    infer_net = fluid.dygraph.jit.load(model_path, configs=configs)

    # inference
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    pred = infer_net(x)
Attributes
::::::::::::
.. py:attribute:: output_spec
Selects the output variables of the saved model ( :ref:`cn_api_fluid_dygraph_TranslatedLayer` ); with these variables specified, the model computes only the selected results.
By default, all return variables of the original :ref:`cn_api_fluid_dygraph_Layer`'s forward method are configured as output variables of the saved :ref:`cn_api_fluid_dygraph_TranslatedLayer`.
The type of ``output_spec`` must be ``list[Variable]``. If the given ``output_spec`` list does not contain all return variables of the original :ref:`cn_api_fluid_dygraph_Layer`'s forward method,
the saved model is pruned according to the given ``output_spec`` list.

.. note::
    The ``output_spec`` attribute is only used when saving a model.

**Example:**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    class SimpleNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(SimpleNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            y = self._linear(x)
            z = self._linear(y)
            loss = fluid.layers.mean(z)
            return z, loss

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # train the model
    net = SimpleNet(8, 8)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    for i in range(10):
        out, loss = net(x)
        loss.backward()
        adam.minimize(loss)
        net.clear_gradients()

    # use SaveLoadConfig.output_spec
    model_path = "simplenet.example.model.output_spec"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    # keep only the inference result in the saved model; discard the loss
    configs.output_spec = [out]
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[x],
        configs=configs)

    infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    # only the inference result is returned
    pred = infer_net(x)
.. py:attribute:: model_filename
The name of the file storing the translated :ref:`cn_api_fluid_dygraph_Layer` model structure ( ``Program`` ). The default file name is ``__model__``.

**Example**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    class SimpleNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(SimpleNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            y = self._linear(x)
            z = self._linear(y)
            return z

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # train the model
    net = SimpleNet(8, 8)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    for i in range(10):
        out = net(x)
        loss = fluid.layers.mean(out)
        loss.backward()
        adam.minimize(loss)
        net.clear_gradients()

    model_path = "simplenet.example.model.model_filename"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    configs.model_filename = "__simplenet__"

    # save the model with configs.model_filename
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[x],
        configs=configs)
    # [result] the saved model directory contains:
    # __simplenet__  __variables__  __variables.info__

    # load the model with configs.model_filename
    infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    pred = infer_net(x)
.. py:attribute:: params_filename
The name of the file storing all persistable parameters (including ``Parameters`` and persistable ``Buffers``) of the translated :ref:`cn_api_fluid_dygraph_Layer`. The default file name is ``__variables__``.

**Example**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    class SimpleNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(SimpleNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            y = self._linear(x)
            z = self._linear(y)
            return z

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # train the model
    net = SimpleNet(8, 8)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    for i in range(10):
        out = net(x)
        loss = fluid.layers.mean(out)
        loss.backward()
        adam.minimize(loss)
        net.clear_gradients()

    model_path = "simplenet.example.model.params_filename"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    configs.params_filename = "__params__"

    # save the model with configs.params_filename
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[x],
        configs=configs)
    # [result] the saved model directory contains:
    # __model__  __params__  __variables.info__

    # load the model with configs.params_filename
    infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    pred = infer_net(x)
.. py:attribute:: separate_params
Configures whether the parameters of the :ref:`cn_api_fluid_dygraph_Layer` are saved as separate files
(for compatibility with the behavior of the :ref:`cn_api_fluid_io_save_inference_model` API).
If set to ``True``, each parameter is saved to one file named after that parameter, and the file name specified by ``SaveLoadConfig.params_filename`` takes no effect. Default: ``False``.

**Example**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    class SimpleNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(SimpleNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            y = self._linear(x)
            z = self._linear(y)
            return z

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # train the model
    net = SimpleNet(8, 8)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    for i in range(10):
        out = net(x)
        loss = fluid.layers.mean(out)
        loss.backward()
        adam.minimize(loss)
        net.clear_gradients()

    model_path = "simplenet.example.model.separate_params"
    configs = fluid.dygraph.jit.SaveLoadConfig()
    configs.separate_params = True

    # save the model with configs.separate_params
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[x],
        configs=configs)
    # [result] the saved model directory contains:
    # linear_0.b_0  linear_0.w_0  __model__  __variables.info__

    # load the model with the same configs
    infer_net = fluid.dygraph.jit.load(model_path, configs=configs)
    x = fluid.dygraph.to_variable(np.random.random((4, 8)).astype('float32'))
    pred = infer_net(x)
.. _cn_api_fluid_dygraph_jit_load:
load
-----------------
.. py:function:: paddle.fluid.dygraph.jit.load(model_path, configs=None)
:api_attr: imperative mode (dygraph)

Loads a model saved by :ref:`cn_api_fluid_dygraph_jit_save` or :ref:`cn_api_fluid_io_save_inference_model` as a :ref:`cn_api_fluid_dygraph_TranslatedLayer` for inference or fine-tune training.

.. note::
    For some historical reasons, if the loaded model was saved by :ref:`cn_api_fluid_io_save_inference_model`,
    there are some limitations when using it for fine-tune training:

    1. Imperative mode does not support ``LoDTensor``, so models whose input variables or parameters depend on LoD information cannot be used for now.
    2. All feed variables of the saved model must be passed into the forward method of the ``TranslatedLayer``.
    3. The ``stop_gradient`` information of the original model's variables has been lost and cannot be recovered accurately.
    4. The ``trainable`` information of the original model's parameters has been lost and cannot be recovered accurately.

Parameters:
    - **model_path** (str) - the directory where the model is saved.
    - **configs** (SaveLoadConfig, optional) - a :ref:`cn_api_fluid_dygraph_jit_SaveLoadConfig` object specifying additional configuration options. Default: ``None``.

Returns: TranslatedLayer - a ``Layer`` object that can execute the saved model.

**Example**

1. Load a model saved by :ref:`cn_api_fluid_dygraph_jit_save` for inference and fine-tune training.

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    BATCH_SIZE = 32
    BATCH_NUM = 20

    def random_batch_reader():
        def _get_random_images_and_labels(image_shape, label_shape):
            image = np.random.random(size=image_shape).astype('float32')
            label = np.random.random(size=label_shape).astype('int64')
            return image, label

        def __reader__():
            for _ in range(BATCH_NUM):
                batch_image, batch_label = _get_random_images_and_labels(
                    [BATCH_SIZE, 784], [BATCH_SIZE, 1])
                yield batch_image, batch_label

        return __reader__

    class LinearNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(LinearNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            return self._linear(x)

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # 1. train & save the model.

    # build the network
    net = LinearNet(784, 1)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())

    # create the DataLoader
    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(random_batch_reader())

    # train
    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = net(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        adam.minimize(avg_loss)
        net.clear_gradients()

    model_path = "linear.example.model"
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[img])

    # 2. load the model & inference
    infer_net = fluid.dygraph.jit.load(model_path)
    x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
    pred = infer_net(x)

    # 3. load the model & fine-tune training
    train_net = fluid.dygraph.jit.load(model_path)
    train_net.train()
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=train_net.parameters())

    # create the DataLoader
    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(random_batch_reader())

    # fine-tune training
    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = train_net(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        adam.minimize(avg_loss)
        train_net.clear_gradients()
2. Load a model saved by :ref:`cn_api_fluid_io_save_inference_model` for inference and fine-tune training.

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid

    BATCH_SIZE = 32
    BATCH_NUM = 20

    def random_batch_reader():
        def _get_random_images_and_labels(image_shape, label_shape):
            image = np.random.random(size=image_shape).astype('float32')
            label = np.random.random(size=label_shape).astype('int64')
            return image, label

        def __reader__():
            for _ in range(BATCH_NUM):
                batch_image, batch_label = _get_random_images_and_labels(
                    [BATCH_SIZE, 784], [BATCH_SIZE, 1])
                yield batch_image, batch_label

        return __reader__

    img = fluid.data(name='img', shape=[None, 784], dtype='float32')
    label = fluid.data(name='label', shape=[None, 1], dtype='int64')
    pred = fluid.layers.fc(input=img, size=10, act='softmax')
    loss = fluid.layers.cross_entropy(input=pred, label=label)
    avg_loss = fluid.layers.mean(loss)

    optimizer = fluid.optimizer.SGD(learning_rate=0.001)
    optimizer.minimize(avg_loss)

    place = fluid.CPUPlace()
    exe = fluid.Executor(place)
    exe.run(fluid.default_startup_program())

    loader = fluid.io.DataLoader.from_generator(
        feed_list=[img, label], capacity=5, iterable=True)
    loader.set_batch_generator(random_batch_reader(), places=place)

    # 1. train & save the inference model
    for data in loader():
        exe.run(
            fluid.default_main_program(),
            feed=data,
            fetch_list=[avg_loss])

    model_path = "fc.example.model"
    fluid.io.save_inference_model(
        model_path, ["img"], [pred], exe)

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # 2. load the model & inference
    fc = fluid.dygraph.jit.load(model_path)
    x = fluid.dygraph.to_variable(np.random.random((1, 784)).astype('float32'))
    pred = fc(x)

    # 3. load the model & fine-tune training
    fc = fluid.dygraph.jit.load(model_path)
    fc.train()
    sgd = fluid.optimizer.SGD(learning_rate=0.001,
                              parameter_list=fc.parameters())

    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(
        random_batch_reader(), places=place)

    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = fc(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        sgd.minimize(avg_loss)
.. _cn_api_fluid_dygraph_jit_save:
save
-----------------
.. py:function:: paddle.fluid.dygraph.jit.save(layer, model_path, input_spec=None, configs=None)
Saves the input ``@declarative``-decorated :ref:`cn_api_fluid_dygraph_Layer` as a model in the :ref:`cn_api_fluid_dygraph_TranslatedLayer` format,
which can be loaded later for inference or fine-tune training.

This API saves the translated model structure ( ``Program`` ) of the input :ref:`cn_api_fluid_dygraph_Layer` and all necessary persistable parameter variables to the given path ``model_path``.
By default, the ``Program`` is saved to a file named ``__model__`` and the persistable parameter variables are saved to a file named ``__variables__``;
descriptive information about the variables is also saved to a file named ``__variables.info__``, which is used during fine-tune training.

The saved model can be loaded and used by the following APIs:
    - :ref:`cn_api_fluid_dygraph_jit_load`
    - :ref:`cn_api_fluid_io_load_inference_model` (requires ``params_filename='__variables__'``; see the sketch after this list)
    - other inference library APIs
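A minimal sketch of the second option, loading the saved result with ``fluid.io.load_inference_model`` (the directory name ``"linear.example.model"`` is assumed from the example below; this runs in static-graph mode):

.. code-block:: python

    import paddle.fluid as fluid

    # run in static-graph mode, i.e. without calling fluid.enable_dygraph() first
    exe = fluid.Executor(fluid.CPUPlace())
    # params_filename must match the default parameter file name '__variables__'
    [program, feed_names, fetch_targets] = fluid.io.load_inference_model(
        "linear.example.model", exe, params_filename='__variables__')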
Parameters:
    - **layer** (Layer) - the :ref:`cn_api_fluid_dygraph_Layer` object to be saved. The input ``Layer`` must be decorated with ``@declarative``.
    - **model_path** (str) - the directory where the model is saved.
    - **input_spec** (list[Variable], optional) - describes the inputs of the saved model; it is a sample input passed to the forward method of the saved ``TranslatedLayer``. If ``None``, all input variables of the original ``Layer``'s forward method are configured as inputs of the saved model. Default: ``None``.
    - **configs** (SaveLoadConfig, optional) - a :ref:`cn_api_fluid_dygraph_jit_SaveLoadConfig` object specifying additional configuration options. Default: ``None``.

Returns: None
**Example**

.. code-block:: python

    import numpy as np
    import paddle.fluid as fluid
    from paddle.fluid.dygraph import Linear
    from paddle.fluid.dygraph import declarative

    BATCH_SIZE = 32
    BATCH_NUM = 20

    def random_batch_reader():
        def _get_random_images_and_labels(image_shape, label_shape):
            image = np.random.random(size=image_shape).astype('float32')
            label = np.random.random(size=label_shape).astype('int64')
            return image, label

        def __reader__():
            for _ in range(BATCH_NUM):
                batch_image, batch_label = _get_random_images_and_labels(
                    [BATCH_SIZE, 784], [BATCH_SIZE, 1])
                yield batch_image, batch_label

        return __reader__

    class LinearNet(fluid.dygraph.Layer):
        def __init__(self, in_size, out_size):
            super(LinearNet, self).__init__()
            self._linear = Linear(in_size, out_size)

        @declarative
        def forward(self, x):
            return self._linear(x)

    # enable imperative (dygraph) mode
    fluid.enable_dygraph()

    # build the network
    net = LinearNet(784, 1)
    adam = fluid.optimizer.AdamOptimizer(learning_rate=0.1, parameter_list=net.parameters())

    # create the DataLoader
    train_loader = fluid.io.DataLoader.from_generator(capacity=5)
    train_loader.set_batch_generator(random_batch_reader())

    # train
    for data in train_loader():
        img, label = data
        label.stop_gradient = True

        cost = net(img)
        loss = fluid.layers.cross_entropy(cost, label)
        avg_loss = fluid.layers.mean(loss)

        avg_loss.backward()
        adam.minimize(avg_loss)
        net.clear_gradients()

    # save the model
    model_path = "linear.example.model"
    fluid.dygraph.jit.save(
        layer=net,
        model_path=model_path,
        input_spec=[img])
......@@ -13,6 +13,7 @@ paddle.imperative
imperative_cn/grad_cn.rst
imperative_cn/guard_cn.rst
imperative_cn/InverseTimeDecay_cn.rst
imperative_cn/jit_cn.rst
imperative_cn/load_cn.rst
imperative_cn/load_dygraph_cn.rst
imperative_cn/NaturalExpDecay_cn.rst
......@@ -27,3 +28,4 @@ paddle.imperative
imperative_cn/save_dygraph_cn.rst
imperative_cn/to_variable_cn.rst
imperative_cn/TracedLayer_cn.rst
imperative_cn/TranslatedLayer_cn.rst
.. _cn_api_imperative_TranslatedLayer:
TranslatedLayer
-------------------------------
:doc_source: paddle.fluid.dygraph.io.TranslatedLayer
===
jit
===
.. toctree::
:maxdepth: 1
jit_cn/save_cn.rst
jit_cn/load_cn.rst
jit_cn/SaveLoadConfig_cn.rst
\ No newline at end of file
.. _cn_api_imperative_jit_SaveLoadConfig:
SaveLoadConfig
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.SaveLoadConfig
\ No newline at end of file
.. _cn_api_imperative_jit_load:
load
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.load
\ No newline at end of file
.. _cn_api_imperative_jit_save:
save
-------------------------------
:doc_source: paddle.fluid.dygraph.jit.save
\ No newline at end of file