Commit c9819a90 by LielinJiang (committed via GitHub, parent e183faaf)

Update hapi Model cn doc (#2615)

* update hapi Model doc
* add summary
* mv Model_cn.rst to model dir
.. _cn_api_paddle_Model:

Model
-------------------------------

.. py:class:: paddle.Model()

A ``Model`` object is a neural network that supports training, evaluation, and inference. It works in both static-graph and dynamic-graph modes, which are switched with ``paddle.disable_static()``; note that this switch must be made before the ``Model`` object is instantiated. In static-graph mode, the inputs must be defined with ``paddle.static.InputSpec``.
**Code Example**

.. code-block:: python

    import paddle
    import paddle.nn as nn
    from paddle.static import InputSpec

    device = paddle.set_device('cpu')  # or 'gpu'
    # if use static graph, do not set
    paddle.disable_static(device)

    net = nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10))

    # inputs and labels are not required for dynamic graph.
    input = InputSpec([None, 784], 'float32', 'x')
    label = InputSpec([None, 1], 'int64', 'label')

    model = paddle.Model(net, input, label)
    optim = paddle.optimizer.SGD(learning_rate=1e-3,
        parameters=model.parameters())
    model.prepare(optim,
                  paddle.nn.CrossEntropyLoss(),
                  paddle.metric.Accuracy())

    data = paddle.vision.datasets.MNIST(mode='train', chw_format=False)
    model.fit(data, epochs=2, batch_size=32, verbose=1)

.. py:function:: train_batch(inputs, labels=None)

Run the model on one batch of data for training.

Parameters:
    - **inputs** (list) - A 1-D list in which every element is a batch of input data, with data type ``numpy.ndarray``.
    - **labels** (list) - A 1-D list in which every element is a batch of input labels, with data type ``numpy.ndarray``. Default: None.

Returns: If no metrics are defined, a list containing the values of the training loss; if metrics are defined, a tuple (list of loss values, list of metric values).

Return type: list
**Code Example**

.. code-block:: python

    import numpy as np
    import paddle
    import paddle.nn as nn
    from paddle.static import InputSpec

    device = paddle.set_device('cpu')  # or 'gpu'
    paddle.disable_static(device)

    net = nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10))

    input = InputSpec([None, 784], 'float32', 'x')
    label = InputSpec([None, 1], 'int64', 'label')
    model = paddle.Model(net, input, label)
    optim = paddle.optimizer.SGD(learning_rate=1e-3,
        parameters=model.parameters())
    model.prepare(optim, paddle.nn.CrossEntropyLoss())

    data = np.random.random(size=(4,784)).astype(np.float32)
    label = np.random.randint(0, 10, size=(4, 1)).astype(np.int64)
    loss = model.train_batch([data], [label])

.. py:function:: eval_batch(inputs, labels=None)

Run the model on one batch of data for evaluation.

Parameters:
    - **inputs** (list) - A 1-D list in which every element is a batch of input data, with data type ``numpy.ndarray``.
    - **labels** (list) - A 1-D list in which every element is a batch of input labels, with data type ``numpy.ndarray``. Default: None.

Returns: If no metrics are defined, a list containing the values of the evaluation loss; if metrics are defined, a tuple (list of loss values, list of metric values).

Return type: list

**Code Example**
.. code-block:: python

    import numpy as np
    import paddle
    import paddle.nn as nn
    from paddle.static import InputSpec

    device = paddle.set_device('cpu')  # or 'gpu'
    paddle.disable_static(device)

    net = nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10))

    input = InputSpec([None, 784], 'float32', 'x')
    label = InputSpec([None, 1], 'int64', 'label')
    model = paddle.Model(net, input, label)
    optim = paddle.optimizer.SGD(learning_rate=1e-3,
        parameters=model.parameters())
    model.prepare(optim,
                  paddle.nn.CrossEntropyLoss())

    data = np.random.random(size=(4,784)).astype(np.float32)
    label = np.random.randint(0, 10, size=(4, 1)).astype(np.int64)
    loss = model.eval_batch([data], [label])

.. py:function:: test_batch(inputs)

Run the model on one batch of data and return its predictions.

**Code Example**
.. code-block:: python

    import numpy as np
    import paddle
    import paddle.nn as nn

    device = paddle.set_device('cpu')  # or 'gpu'
    paddle.disable_static(device)

    net = nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10),
        nn.Softmax())

    model = paddle.Model(net)
    model.prepare()

    data = np.random.random(size=(4,784)).astype(np.float32)
    out = model.test_batch([data])
    print(out)
.. py:function:: save(path, training=True):

Save the model parameters, the optimizer state accumulated during training, and the parameters and files needed for inference to the given path. If ``training=True``, all model parameters are saved to a single file with the ``.pdparams`` suffix, and all optimizer state and related hyper-parameters, such as ``beta1``, ``beta2`` and ``momentum`` of the ``Adam`` optimizer, are saved to a file with the ``.pdopt`` suffix. If the optimizer, for example plain ``SGD``, has no such state, that file is not produced. If ``training=False``, none of the files above are saved; only the parameter file and model file needed for inference are saved.

Note that in dynamic-graph mode, saving the inference model file and its parameter file requires decorating ``forward`` with ``@paddle.jit.to_static``.

Parameters:
    - **path** (str) - The prefix of the files to save, in the form ``dirname/file_prefix`` or ``file_prefix``.
    - **training** (bool, optional) - Whether to save the training state, including model parameters and optimizer state. If False, only the parameters and files needed for inference are saved. Default: True.

Returns: None

**Code Example**
.. code-block:: python

    import paddle
    import paddle.nn as nn
    from paddle.static import InputSpec

    class Mnist(nn.Layer):
        def __init__(self):
            super(Mnist, self).__init__()
            self.net = nn.Sequential(
                nn.Linear(784, 200),
                nn.Tanh(),
                nn.Linear(200, 10),
                nn.Softmax())

        # If save for inference in dygraph, need this
        @paddle.jit.to_static
        def forward(self, x):
            return self.net(x)

    dynamic = True  # False
    device = paddle.set_device('cpu')
    # if use static graph, do not set
    paddle.disable_static(device) if dynamic else None

    # inputs and labels are not required for dynamic graph.
    input = InputSpec([None, 784], 'float32', 'x')
    label = InputSpec([None, 1], 'int64', 'label')

    model = paddle.Model(Mnist(), input, label)
    optim = paddle.optimizer.SGD(learning_rate=1e-3,
        parameters=model.parameters())
    model.prepare(optim, paddle.nn.CrossEntropyLoss())
    data = paddle.vision.datasets.MNIST(mode='train', chw_format=False)
    model.fit(data, epochs=1, batch_size=32, verbose=0)

    model.save('checkpoint/test')  # save for training
    model.save('inference_model', False)  # save for inference
.. py:function:: load(path, skip_mismatch=False, reset_optimizer=False):

Load the model parameters and optimizer state from the specified files. If you do not want to restore the optimizer state, the optimizer file need not exist. Note that parameters are looked up by the structured names they had when the model was saved; when loading parameters for transfer learning, make sure the pre-trained model and the current model use the same structured parameter names (a transfer-learning sketch follows the example below).

Parameters:
    - **path** (str) - The prefix of the files that store the parameters or the optimizer state, e.g. ``path.pdparams`` or ``path.pdopt``; the latter is optional if you do not want to restore the optimizer state.
    - **skip_mismatch** (bool) - Whether to skip parameters in the saved file whose shape does not match or whose key is missing; if False, an error is raised when a mismatch is encountered. Default: False.
    - **reset_optimizer** (bool) - If True, the given optimizer file is ignored and the optimizer state is reset; if False, the optimizer state is restored from the given file. Default: False.

Returns: None

**Code Example**
.. code-block:: python

    import paddle
    import paddle.nn as nn

    device = paddle.set_device('cpu')
    paddle.disable_static(device)

    model = paddle.Model(nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10),
        nn.Softmax()))

    model.save('checkpoint/test')
    model.load('checkpoint/test')
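
Because parameters are matched by their structured names, ``load`` can also be used for transfer learning. The following is a minimal sketch, assuming a checkpoint was previously saved under the hypothetical prefix ``pretrained/backbone``: ``skip_mismatch=True`` keeps the fresh initialization for any parameter whose name or shape does not match, and ``reset_optimizer=True`` ignores any saved optimizer state.

.. code-block:: python

    import paddle
    import paddle.nn as nn

    paddle.disable_static()

    # Same structured layer names as the pretrained network, except the new head.
    net = nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 2))  # new task with 2 classes instead of 10

    model = paddle.Model(net)
    # 'pretrained/backbone' is a hypothetical checkpoint prefix; mismatched
    # parameters are skipped and the optimizer state is not restored.
    model.load('pretrained/backbone', skip_mismatch=True, reset_optimizer=True)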
.. py:function:: parameters(*args, **kwargs):

Return a list that contains all of the parameters of the model.
**Code Example**

.. code-block:: python

    import paddle
    import paddle.nn as nn

    paddle.disable_static()

    model = paddle.Model(nn.Sequential(
        nn.Linear(784, 200),
        nn.Tanh(),
        nn.Linear(200, 10)))

    params = model.parameters()
.. py:function:: prepare(optimizer=None, loss_function=None, metrics=None):

Configure the components the model needs, such as the optimizer, the loss function, and the metrics.

Parameters:
    - **optimizer** (Optimizer) - Must be set when the model is to be trained; it may be omitted for evaluation or testing. Default: None.
    - **loss_function** (Loss) - Must be set when the model is to be trained. Default: None.
    - **metrics** (Metric|list[Metric]) - When set, all given metrics are computed during training and evaluation and the corresponding results are returned. Default: None.

Returns: None
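
``prepare`` is called once before ``fit``, ``evaluate``, or ``predict``. As a minimal sketch of the typical configurations (assuming dynamic-graph mode and a simple ``nn.Sequential`` network, not a specific example from this page):

.. code-block:: python

    import paddle
    import paddle.nn as nn

    paddle.disable_static()

    net = nn.Sequential(nn.Linear(784, 200), nn.Tanh(), nn.Linear(200, 10))
    model = paddle.Model(net)

    # For training, the optimizer and loss are required; metrics are optional.
    optim = paddle.optimizer.Adam(learning_rate=1e-3,
                                  parameters=model.parameters())
    model.prepare(optim,
                  paddle.nn.CrossEntropyLoss(),
                  paddle.metric.Accuracy())

    # For evaluation only, metrics alone are enough:
    #     model.prepare(metrics=paddle.metric.Accuracy())
    # For inference only, prepare() can be called with no arguments:
    #     model.prepare()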
.. py:function:: fit(train_data=None, eval_data=None, batch_size=1, epochs=1, eval_freq=1, log_freq=10, save_dir=None, save_freq=1, verbose=2, drop_last=False, shuffle=True, num_workers=0, callbacks=None):

Train the model on the given training data; when ``eval_data`` is provided, the model is also evaluated during training.

**Code Example**
.. code-block:: python

    # 1. Train with a Dataset and set the batch size.
    import paddle
    from paddle.static import InputSpec

    dynamic = True
    device = paddle.set_device('cpu')  # or 'gpu'
    paddle.disable_static(device) if dynamic else None

    train_dataset = paddle.vision.datasets.MNIST(mode='train')
    val_dataset = paddle.vision.datasets.MNIST(mode='test')

    input = InputSpec([None, 1, 28, 28], 'float32', 'image')
    label = InputSpec([None, 1], 'int64', 'label')

    model = paddle.Model(
        paddle.vision.models.LeNet(classifier_activation=None),
        input, label)
    optim = paddle.optimizer.Adam(
        learning_rate=0.001, parameters=model.parameters())
    model.prepare(
        optim,
        paddle.nn.CrossEntropyLoss(),
        paddle.metric.Accuracy(topk=(1, 2)))
    model.fit(train_dataset,
              val_dataset,
              epochs=2,
              batch_size=64,
              save_dir='mnist_checkpoint')

.. code-block:: python

    # 2. Train with a DataLoader.
    import paddle
    from paddle.static import InputSpec

    dynamic = True
    device = paddle.set_device('cpu')  # or 'gpu'
    paddle.disable_static(device) if dynamic else None

    train_dataset = paddle.vision.datasets.MNIST(mode='train')
    train_loader = paddle.io.DataLoader(train_dataset,
        places=device, batch_size=64)
    val_dataset = paddle.vision.datasets.MNIST(mode='test')
    val_loader = paddle.io.DataLoader(val_dataset,
        places=device, batch_size=64)

    input = InputSpec([None, 1, 28, 28], 'float32', 'image')
    label = InputSpec([None, 1], 'int64', 'label')

    model = paddle.Model(
        paddle.vision.models.LeNet(classifier_activation=None), input, label)
    optim = paddle.optimizer.Adam(
        learning_rate=0.001, parameters=model.parameters())
    model.prepare(
        optim,
        paddle.nn.CrossEntropyLoss(),
        paddle.metric.Accuracy(topk=(1, 2)))
    model.fit(train_loader,
              val_loader,
              epochs=2,
              save_dir='mnist_checkpoint')

.. py:function:: evaluate(eval_data, batch_size=1, log_freq=10, verbose=2, num_workers=0, callbacks=None):

Evaluate the model's loss and metrics on the given input data.

Parameters:
    - **eval_data** (Dataset|DataLoader) - An iterable data source; an instance of ``paddle.io.Dataset`` or ``paddle.io.DataLoader`` is recommended. Default: None.

**Code Example**
.. code-block:: python

    import paddle
    from paddle.static import InputSpec

    # declarative mode
    val_dataset = paddle.vision.datasets.MNIST(mode='test')

    input = InputSpec([-1, 1, 28, 28], 'float32', 'image')
    label = InputSpec([None, 1], 'int64', 'label')
    model = paddle.Model(paddle.vision.models.LeNet(), input, label)
    model.prepare(metrics=paddle.metric.Accuracy())

    result = model.evaluate(val_dataset, batch_size=64)
    print(result)

    # imperative mode
    paddle.disable_static()
    model = paddle.Model(paddle.vision.models.LeNet())
    model.prepare(metrics=paddle.metric.Accuracy())
    result = model.evaluate(val_dataset, batch_size=64)
    print(result)
.. py:function:: predict(test_data, batch_size=1, num_workers=0, stack_outputs=False, callbacks=None):

Compute the model's predictions on the given input data.

Parameters:
    - **test_data** (Dataset|DataLoader) - An iterable data source; an instance of ``paddle.io.Dataset`` or ``paddle.io.DataLoader`` is recommended. Default: None.

**Code Example**

.. code-block:: python

    import numpy as np
    import paddle
    from paddle.static import InputSpec

    class MnistDataset(paddle.vision.datasets.MNIST):
        def __init__(self, mode, return_label=True):
            super(MnistDataset, self).__init__(mode=mode)
            self.return_label = return_label

        def __getitem__(self, idx):
            img = np.reshape(self.images[idx], [1, 28, 28])
            if self.return_label:
                return img, np.array(self.labels[idx]).astype('int64')
            return img,

        def __len__(self):
            return len(self.images)

    test_dataset = MnistDataset(mode='test', return_label=False)

    # declarative mode
    input = InputSpec([-1, 1, 28, 28], 'float32', 'image')
    model = paddle.Model(paddle.vision.models.LeNet(), input)
    model.prepare()
    result = model.predict(test_dataset, batch_size=64)
    print(len(result[0]), result[0][0].shape)

    # imperative mode
    device = paddle.set_device('cpu')
    paddle.disable_static(device)
    model = paddle.Model(paddle.vision.models.LeNet())
    model.prepare()
    result = model.predict(test_dataset, batch_size=64)
    print(len(result[0]), result[0][0].shape)
.. py:function:: summary(input_size=None, batch_size=None, dtype=None):

Print the basic structure and parameter information of the network.

Parameters:
    - **input_size** (tuple|InputSpec|list[tuple|InputSpec], optional) - The size of the input tensor. If the network has a single input, set this to a tuple or an InputSpec. If the model has multiple inputs, set this to a list[tuple|InputSpec] containing the shape of each input. If not set, ``self._inputs`` is used as the input. Default: None.
    - **batch_size** (int, optional) - The batch size of the input tensor. Default: None.
    - **dtype** (str, optional) - The data type of the input tensor; if not given, ``float32`` is used. Default: None.

Returns: A dict containing the total size of all parameters of the network and the size of all trainable parameters.
**Code Example**

.. code-block:: python

    import paddle
    from paddle.static import InputSpec

    dynamic = True
    device = paddle.set_device('cpu')
    paddle.disable_static(device) if dynamic else None

    input = InputSpec([None, 1, 28, 28], 'float32', 'image')
    label = InputSpec([None, 1], 'int64', 'label')

    model = paddle.Model(paddle.vision.LeNet(classifier_activation=None),
        input, label)
    optim = paddle.optimizer.Adam(
        learning_rate=0.001, parameters=model.parameters())
    model.prepare(
        optim,
        paddle.nn.CrossEntropyLoss())

    params_info = model.summary()
    print(params_info)