Commit 3e91a34d, written by Chen Long, committed via GitHub

add_tutorial test=develop (#2582)

* add_tutorial test=develop

* fix quick start test=develop

Parent commit: 7b69a251
Image classification using a convolutional neural network
==========================================================

This tutorial shows how to use PaddlePaddle's convolutional neural network layers for an image classification task. It is a fairly simple example: a network made of three convolutional layers is trained to classify the images of the `cifar10 <https://www.cs.toronto.edu/~kriz/cifar.html>`__ dataset.
Environment setup
-----------------

We will use the PaddlePaddle 2.0-beta release.
.. code:: ipython3

    import paddle
    import paddle.nn.functional as F
    from paddle.vision.transforms import Normalize
    import numpy as np
    import matplotlib.pyplot as plt

    paddle.disable_static()
    print(paddle.__version__)
    print(paddle.__git_commit__)


.. parsed-literal::

    0.0.0
    264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d
Loading and browsing the dataset
--------------------------------

We will use Paddle's built-in API to download the dataset and to prepare the data iterators used later during training. The cifar10 dataset consists of 60,000 colour images of size 32 x 32: 50,000 of them form the training set and the remaining 10,000 form the test set. The images are split into 10 categories, and our task is to train a model that classifies them correctly.
.. code:: ipython3

    cifar10_train = paddle.vision.datasets.cifar.Cifar10(mode='train', transform=None)
    train_images = np.zeros((50000, 32, 32, 3), dtype='float32')
    train_labels = np.zeros((50000, 1), dtype='int32')

    for i, data in enumerate(cifar10_train):
        train_image, train_label = data

        train_image = train_image.reshape((3, 32, 32)).astype('float32') / 255.
        train_image = train_image.transpose(1, 2, 0)

        train_images[i, :, :, :] = train_image
        train_labels[i, 0] = train_label
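As a small sanity check (a sketch that is not part of the original tutorial), the prepared arrays can be inspected before moving on: the pixel values should lie in [0, 1] and the labels should cover the 10 class ids.

.. code:: ipython3

    # Sketch: verify the ranges of the arrays filled above.
    print(train_images.shape, train_images.min(), train_images.max())  # expected: (50000, 32, 32, 3) 0.0 1.0
    print(np.unique(train_labels))                                     # expected: [0 1 2 3 4 5 6 7 8 9]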
Browsing the dataset
--------------------

Next we randomly pick some images from the dataset and display them, to get an intuitive feel for the data.
.. code:: ipython3

    class_names = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']

    plt.figure(figsize=(10, 10))
    sample_idxs = np.random.choice(50000, size=25, replace=False)

    for i in range(25):
        plt.subplot(5, 5, i + 1)
        plt.xticks([])
        plt.yticks([])
        plt.imshow(train_images[sample_idxs[i]], cmap=plt.cm.binary)
        plt.xlabel(class_names[train_labels[sample_idxs[i]][0]])
    plt.show()
.. image:: convnet_image_classification_files/convnet_image_classification_6_0.png
Building the network
--------------------

Next we use Paddle to define a classification network built from three 2D convolutions (``Conv2d``), each followed by a ``relu`` activation, two 2D pooling layers (``MaxPool2d``), and two linear layers. It maps an image of shape ``(32, 32, 3)`` to 10 outputs, one for each of the 10 classes.
.. code:: ipython3

    class MyNet(paddle.nn.Layer):
        def __init__(self, num_classes=1):
            super(MyNet, self).__init__()

            self.conv1 = paddle.nn.Conv2d(in_channels=3, out_channels=32, kernel_size=(3, 3))
            self.pool1 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)

            self.conv2 = paddle.nn.Conv2d(in_channels=32, out_channels=64, kernel_size=(3, 3))
            self.pool2 = paddle.nn.MaxPool2d(kernel_size=2, stride=2)

            self.conv3 = paddle.nn.Conv2d(in_channels=64, out_channels=64, kernel_size=(3, 3))

            self.flatten = paddle.nn.Flatten()
            self.linear1 = paddle.nn.Linear(in_features=1024, out_features=64)
            self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.pool1(x)

            x = self.conv2(x)
            x = F.relu(x)
            x = self.pool2(x)

            x = self.conv3(x)
            x = F.relu(x)

            x = self.flatten(x)
            x = self.linear1(x)
            x = F.relu(x)
            x = self.linear2(x)
            return x
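To see where the ``in_features=1024`` value above comes from, it helps to trace the spatial size of a 32 x 32 input through the layers defined in ``MyNet``. The arithmetic below is a small sketch, not part of the original tutorial.

.. code:: ipython3

    # Each Conv2d uses kernel_size=3 with no padding; each MaxPool2d halves the size.
    def conv_out(size, kernel=3):
        return size - kernel + 1

    size = conv_out(32)      # conv1: 32 -> 30
    size = size // 2         # pool1: 30 -> 15
    size = conv_out(size)    # conv2: 15 -> 13
    size = size // 2         # pool2: 13 -> 6
    size = conv_out(size)    # conv3: 6 -> 4

    print(size, 64 * size * size)  # 4 1024 -> matches Linear(in_features=1024)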
Training the model
------------------

Next we train the model with a plain loop. We will:

-  use the ``paddle.optimizer.Adam`` optimizer,
-  use ``F.softmax_with_cross_entropy`` to compute the loss,
-  use ``paddle.io.DataLoader`` to load the data and assemble batches (a short shape sketch follows this list).
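Before the full training loop, here is a minimal sketch (not part of the original tutorial) showing the shapes that ``F.softmax_with_cross_entropy`` expects: ``float32`` logits of shape ``(N, 10)`` and ``int64`` labels of shape ``(N, 1)``, returning one loss value per sample.

.. code:: ipython3

    # Sketch: shapes expected by softmax_with_cross_entropy on a toy batch of 4 samples.
    toy_logits = paddle.to_tensor(np.random.randn(4, 10).astype('float32'))
    toy_labels = paddle.to_tensor(np.random.randint(0, 10, size=(4, 1)).astype('int64'))

    toy_loss = F.softmax_with_cross_entropy(toy_logits, toy_labels)
    print(toy_loss.shape)                 # [4, 1], one loss per sample
    print(paddle.mean(toy_loss).numpy())  # the scalar used for the backward pass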
.. code:: ipython3

    epoch_num = 10
    batch_size = 32
    learning_rate = 0.001

.. code:: ipython3

    val_acc_history = []
    val_loss_history = []

    def train(model):
        print('start training ... ')
        # turn into training mode
        model.train()

        opt = paddle.optimizer.Adam(learning_rate=learning_rate,
                                    parameters=model.parameters())

        train_loader = paddle.io.DataLoader(cifar10_train,
                                            places=paddle.CPUPlace(),
                                            shuffle=True,
                                            batch_size=batch_size)

        cifar10_test = paddle.vision.datasets.cifar.Cifar10(mode='test', transform=None)
        valid_loader = paddle.io.DataLoader(cifar10_test, places=paddle.CPUPlace(), batch_size=batch_size)

        for epoch in range(epoch_num):
            for batch_id, data in enumerate(train_loader()):
                x_data = paddle.cast(data[0], 'float32')
                x_data = paddle.reshape(x_data, (-1, 3, 32, 32)) / 255.0
                y_data = paddle.cast(data[1], 'int64')
                y_data = paddle.reshape(y_data, (-1, 1))

                logits = model(x_data)
                loss = F.softmax_with_cross_entropy(logits, y_data)
                avg_loss = paddle.mean(loss)

                if batch_id % 1000 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))

                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

            # evaluate model after one epoch
            model.eval()
            accuracies = []
            losses = []
            for batch_id, data in enumerate(valid_loader()):
                x_data = paddle.cast(data[0], 'float32')
                x_data = paddle.reshape(x_data, (-1, 3, 32, 32)) / 255.0
                y_data = paddle.cast(data[1], 'int64')
                y_data = paddle.reshape(y_data, (-1, 1))

                logits = model(x_data)
                loss = F.softmax_with_cross_entropy(logits, y_data)
                acc = paddle.metric.accuracy(logits, y_data)
                accuracies.append(np.mean(acc.numpy()))
                losses.append(np.mean(loss.numpy()))

            avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
            print("[validation] accuracy/loss: {}/{}".format(avg_acc, avg_loss))
            val_acc_history.append(avg_acc)
            val_loss_history.append(avg_loss)
            model.train()

    model = MyNet(num_classes=10)
    train(model)
.. parsed-literal::
start training ...
epoch: 0, batch_id: 0, loss is: [2.3024805]
epoch: 0, batch_id: 1000, loss is: [1.1422595]
[validation] accuracy/loss: 0.5575079917907715/1.2516425848007202
epoch: 1, batch_id: 0, loss is: [0.9350736]
epoch: 1, batch_id: 1000, loss is: [1.3825703]
[validation] accuracy/loss: 0.5959464907646179/1.1320706605911255
epoch: 2, batch_id: 0, loss is: [0.979844]
epoch: 2, batch_id: 1000, loss is: [0.87730503]
[validation] accuracy/loss: 0.6607428193092346/0.9754576086997986
epoch: 3, batch_id: 0, loss is: [0.7345351]
epoch: 3, batch_id: 1000, loss is: [1.0982555]
[validation] accuracy/loss: 0.6671326160430908/0.9667007327079773
epoch: 4, batch_id: 0, loss is: [0.9291839]
epoch: 4, batch_id: 1000, loss is: [1.1812104]
[validation] accuracy/loss: 0.6895966529846191/0.9075900316238403
epoch: 5, batch_id: 0, loss is: [0.5072213]
epoch: 5, batch_id: 1000, loss is: [0.60360587]
[validation] accuracy/loss: 0.6944888234138489/0.8740479350090027
epoch: 6, batch_id: 0, loss is: [0.5917944]
epoch: 6, batch_id: 1000, loss is: [0.7963876]
[validation] accuracy/loss: 0.7072683572769165/0.8597638607025146
epoch: 7, batch_id: 0, loss is: [0.50116754]
epoch: 7, batch_id: 1000, loss is: [0.95844793]
[validation] accuracy/loss: 0.700579047203064/0.876727819442749
epoch: 8, batch_id: 0, loss is: [0.87496114]
epoch: 8, batch_id: 1000, loss is: [0.68749857]
[validation] accuracy/loss: 0.7198482155799866/0.8403064608573914
epoch: 9, batch_id: 0, loss is: [0.8548105]
epoch: 9, batch_id: 1000, loss is: [0.6488569]
[validation] accuracy/loss: 0.7106629610061646/0.874437153339386
.. code:: ipython3

    plt.plot(val_acc_history, label='validation accuracy')
    plt.xlabel('Epoch')
    plt.ylabel('Accuracy')
    plt.ylim([0.5, 0.8])
    plt.legend(loc='lower right')


.. parsed-literal::

    <matplotlib.legend.Legend at 0x163d6ec50>
.. image:: convnet_image_classification_files/convnet_image_classification_12_1.png
The End
-------

As the example above shows, a simple convolutional neural network built with Paddle reaches an accuracy of more than 71% on the cifar10 dataset.
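As a final, hedged sketch (not part of the original tutorial), the trained ``model`` can be used to classify a single image from the test split:

.. code:: ipython3

    # Sketch: predict the class of one test image with the model trained above.
    cifar10_test = paddle.vision.datasets.cifar.Cifar10(mode='test', transform=None)
    image, label = cifar10_test[0]

    x = paddle.to_tensor(image.reshape((1, 3, 32, 32)).astype('float32') / 255.0)

    model.eval()
    logits = model(x)
    pred = int(np.argmax(logits.numpy(), axis=1)[0])
    print('predicted:', class_names[pred], '| ground truth:', class_names[label])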
Image search based on image similarity
======================================

Brief introduction
------------------

Image search is a widely used application of deep learning. Whether it is retrieving engineering drawings or finding similar pictures on the internet, deep-learning methods can, for a given query image, retrieve the images that are most similar to it.

This example briefly shows how to implement image search with the PaddlePaddle open-source framework. The basic idea is to first map each image to a vector in a high-dimensional space with a convolutional neural network, and then measure how similar two images are by comparing their vector representations (in this example we use cosine similarity). During training, the objective is to make images of the same category as similar as possible and images of different categories as dissimilar as possible. At prediction time, for an image uploaded by the user we compute its similarity to every image in the gallery and return the gallery images sorted from most to least similar.
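Cosine similarity itself is easy to state: it is the dot product of two vectors divided by the product of their norms, so it only compares directions. A tiny numpy sketch (not part of the original tutorial):

.. code:: ipython3

    import numpy as np

    def cosine_similarity(a, b):
        # cos(a, b) = (a . b) / (||a|| * ||b||); after L2 normalisation it is a plain dot product.
        a = a / np.linalg.norm(a)
        b = b / np.linalg.norm(b)
        return float(np.dot(a, b))

    print(cosine_similarity(np.array([1.0, 0.0]), np.array([1.0, 0.0])))   # 1.0, same direction
    print(cosine_similarity(np.array([1.0, 0.0]), np.array([0.0, 1.0])))   # 0.0, orthogonal
    print(cosine_similarity(np.array([1.0, 0.0]), np.array([-1.0, 0.0])))  # -1.0, opposite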
Environment setup
-----------------

This example is based on version 2.0 of the PaddlePaddle open-source framework.
.. code:: ipython3

    import paddle
    import paddle.nn.functional as F
    import numpy as np
    import random
    import matplotlib.pyplot as plt

    from PIL import Image
    from collections import defaultdict

    paddle.disable_static()
    print(paddle.__version__)
    print(paddle.__git_commit__)


.. parsed-literal::

    0.0.0
    89af2088b6e74bdfeef2d4d78e08461ed2aafee5
The dataset
-----------

This example uses the classic `CIFAR-10 <https://www.cs.toronto.edu/~kriz/cifar.html>`__ dataset, which consists of 50,000 training images and 10,000 test images, each an RGB image with width and height 32. ``paddle.vision.datasets.cifar`` downloads the data for us and provides an iterator over it; we normalise the pixel values into the ``(0, 1.0)`` range and store the training and test data in two ``numpy`` arrays for the later training and evaluation steps.
.. code:: ipython3

    cifar10_train = paddle.vision.datasets.cifar.Cifar10(mode='train', transform=None)
    x_train = np.zeros((50000, 3, 32, 32))
    y_train = np.zeros((50000, 1), dtype='int32')

    for i in range(len(cifar10_train)):
        train_image, train_label = cifar10_train[i]
        train_image = train_image.reshape((3, 32, 32))

        # normalize the data
        x_train[i, :, :, :] = train_image / 255.
        y_train[i, 0] = train_label

    y_train = np.squeeze(y_train)
    print(x_train.shape)
    print(y_train.shape)


.. parsed-literal::

    (50000, 3, 32, 32)
    (50000,)

.. code:: ipython3

    cifar10_test = paddle.vision.datasets.cifar.Cifar10(mode='test', transform=None)
    x_test = np.zeros((10000, 3, 32, 32), dtype='float32')
    y_test = np.zeros((10000, 1), dtype='int64')

    for i in range(len(cifar10_test)):
        test_image, test_label = cifar10_test[i]
        test_image = test_image.reshape((3, 32, 32))

        # normalize the data
        x_test[i, :, :, :] = test_image / 255.
        y_test[i, 0] = test_label

    y_test = np.squeeze(y_test)
    print(x_test.shape)
    print(y_test.shape)


.. parsed-literal::

    (10000, 3, 32, 32)
    (10000,)
Exploring the data
------------------

Next we randomly pick a few images from the training data and take a look at them.
.. code:: ipython3

    height_width = 32

    def show_collage(examples):
        box_size = height_width + 2
        num_rows, num_cols = examples.shape[:2]

        collage = Image.new(
            mode="RGB",
            size=(num_cols * box_size, num_rows * box_size),
            color=(255, 255, 255),
        )
        for row_idx in range(num_rows):
            for col_idx in range(num_cols):
                array = (np.array(examples[row_idx, col_idx]) * 255).astype(np.uint8)
                array = array.transpose(1, 2, 0)
                collage.paste(
                    Image.fromarray(array), (col_idx * box_size, row_idx * box_size)
                )

        collage = collage.resize((2 * num_cols * box_size, 2 * num_rows * box_size))
        return collage

    sample_idxs = np.random.randint(0, 50000, size=(5, 5))
    examples = x_train[sample_idxs]
    show_collage(examples)
.. image:: image_search_files/image_search_8_0.png
Building the training data
--------------------------

The training samples of an image-retrieval model differ from those of an ordinary classification task: instead of ``(image, class)`` pairs, each sample has the form ``(image0, image1, similar_or_not)``, i.e. it consists of two images together with a label (0 or 1) that says whether the two images are similar.

Naturally, two images from the same category are considered similar, while two images from different categories are considered dissimilar.

To make it easy to sample pairs of similar (and dissimilar) images, we first build an index that maps each class to all image indices of that class.
.. code:: ipython3

    class_idx_to_train_idxs = defaultdict(list)
    for y_train_idx, y in enumerate(y_train):
        class_idx_to_train_idxs[y].append(y_train_idx)

    class_idx_to_test_idxs = defaultdict(list)
    for y_test_idx, y in enumerate(y_test):
        class_idx_to_test_idxs[y].append(y_test_idx)
With this index in place, we can prepare a data-reading iterator for Paddle. Each call yields ``2 * number of classes`` images, which for CIFAR-10 means 20 images. The first 10 images and the last 10 images each contain one randomly drawn image per class. During training this gives us 10 similar pairs and 90 dissimilar pairs: each of the first 10 images is similar to the image at the corresponding position among the last 10, and dissimilar to the other 9.
.. code:: ipython3

    num_classes = 10

    def reader_creator(num_batchs):
        def reader():
            iter_step = 0
            while True:
                if iter_step >= num_batchs:
                    break
                iter_step += 1

                x = np.empty((2, num_classes, 3, height_width, height_width), dtype=np.float32)
                for class_idx in range(num_classes):
                    examples_for_class = class_idx_to_train_idxs[class_idx]
                    anchor_idx = random.choice(examples_for_class)
                    positive_idx = random.choice(examples_for_class)
                    while positive_idx == anchor_idx:
                        positive_idx = random.choice(examples_for_class)
                    x[0, class_idx] = x_train[anchor_idx]
                    x[1, class_idx] = x_train[positive_idx]
                yield x

        return reader

    # num_batchs: how many batchs to generate
    def anchor_positive_pairs(num_batchs=100):
        return reader_creator(num_batchs)
.. code:: ipython3

    pairs_train_reader = anchor_positive_pairs(num_batchs=1000)

Let us fetch the first batch and visualise it, which makes the structure of the training samples easier to understand.

.. code:: ipython3

    examples = next(pairs_train_reader())
    print(examples.shape)
    show_collage(examples)


.. parsed-literal::

    (2, 10, 3, 32, 32)
.. image:: image_search_files/image_search_15_1.png
A network that maps images to high-dimensional vectors
-------------------------------------------------------

Our goal is to first map every image into a high-dimensional space and then compare images by the similarity of their representations in that space.

The network below converts an image of shape ``(3, 32, 32)`` into a vector of shape ``(8,)``. In some texts this vector is also called an ``Embedding``; note that it is not the same thing as a word embedding in natural language processing.

The model consists of three consecutive convolutions followed by a global average pooling, and a fully connected layer that maps the result into an 8-dimensional vector space. To make the later cosine-similarity computation convenient, the output is finally normalised with `l2_normalize <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/l2_normalize_cn.html>`__ (this takes care of the denominator of the cosine similarity).
.. code:: ipython3

    class MyNet(paddle.nn.Layer):
        def __init__(self):
            super(MyNet, self).__init__()

            self.conv1 = paddle.nn.Conv2d(in_channels=3,
                                          out_channels=32,
                                          kernel_size=(3, 3),
                                          stride=2)

            self.conv2 = paddle.nn.Conv2d(in_channels=32,
                                          out_channels=64,
                                          kernel_size=(3, 3),
                                          stride=2)

            self.conv3 = paddle.nn.Conv2d(in_channels=64,
                                          out_channels=128,
                                          kernel_size=(3, 3),
                                          stride=2)

            self.global_pool = paddle.nn.AdaptiveAvgPool2d((1, 1))
            self.fc1 = paddle.nn.Linear(in_features=128, out_features=8)

        def forward(self, x):
            x = self.conv1(x)
            x = F.relu(x)
            x = self.conv2(x)
            x = F.relu(x)
            x = self.conv3(x)
            x = F.relu(x)
            x = self.global_pool(x)
            x = paddle.squeeze(x, axis=[2, 3])
            x = self.fc1(x)
            x = F.l2_normalize(x, axis=1)
            return x
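A quick shape check, as a sketch that is not part of the original tutorial: the three stride-2 convolutions shrink a 32 x 32 input to 15, 7 and then 3, the adaptive average pooling reduces it to 1 x 1, and ``fc1`` maps the 128 channels to an 8-dimensional, L2-normalised vector.

.. code:: ipython3

    # Sketch: run a random batch through the embedding network and check the output.
    sanity_model = MyNet()
    dummy = paddle.to_tensor(np.random.randn(4, 3, 32, 32).astype('float32'))
    embedding = sanity_model(dummy)
    print(embedding.shape)                            # [4, 8]
    print(np.linalg.norm(embedding.numpy(), axis=1))  # ~1.0 per row, thanks to l2_normalize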
The training loop below works as follows:

-  The ``inverse_temperature`` parameter keeps the softmax in a region where the gradients are more pronounced (compare the ``scale`` applied after the dot product in `attention is all you need <https://arxiv.org/abs/1706.03762>`__).
-  In each step, the network above computes the high-dimensional representations of the first 10 images (the anchors) and of the last 10 images, and `matmul <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/matmul_cn.html>`__ then computes the similarity of every anchor with every one of the last 10 images, so ``similarities`` is a ``(10, 10)`` Tensor.
-  For `softmax_with_cross_entropy <https://www.paddlepaddle.org.cn/documentation/docs/zh/api_cn/layers_cn/softmax_with_cross_entropy_cn.html>`__ the class labels are simply ``0`` to ``num_classes - 1``, which drives the similarity of matching image pairs towards 1.0 and the similarity of non-matching pairs towards -1.0 (a small numpy illustration of this labelling follows the list).
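The labelling trick can be illustrated with plain numpy (a sketch, not part of the original tutorial): anchors and positives are ordered by class in the same way, so row *i* of the ``(10, 10)`` similarity matrix should peak at column *i*, and the cross-entropy with labels ``arange(10)`` rewards exactly that diagonal.

.. code:: ipython3

    # Sketch: an idealised (10, 10) similarity matrix where matching pairs are most similar.
    demo_similarities = np.full((10, 10), 0.1, dtype='float32')
    np.fill_diagonal(demo_similarities, 0.9)
    print(np.argmax(demo_similarities, axis=1))  # [0 1 2 ... 9] == the sparse labels used below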
.. code:: ipython3

    # define the training loop
    def train(model):
        print('start training ... ')
        model.train()

        inverse_temperature = paddle.to_tensor(np.array([1.0 / 0.2], dtype='float32'))

        epoch_num = 20

        opt = paddle.optimizer.Adam(learning_rate=0.0001,
                                    parameters=model.parameters())

        for epoch in range(epoch_num):
            for batch_id, data in enumerate(pairs_train_reader()):
                anchors_data, positives_data = data[0], data[1]

                anchors = paddle.to_tensor(anchors_data)
                positives = paddle.to_tensor(positives_data)

                anchor_embeddings = model(anchors)
                positive_embeddings = model(positives)

                similarities = paddle.matmul(anchor_embeddings, positive_embeddings, transpose_y=True)
                similarities = paddle.multiply(similarities, inverse_temperature)

                sparse_labels = paddle.arange(0, num_classes, dtype='int64')
                sparse_labels = paddle.reshape(sparse_labels, (num_classes, 1))

                loss = F.softmax_with_cross_entropy(similarities, sparse_labels)
                avg_loss = paddle.mean(loss)
                if batch_id % 500 == 0:
                    print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))

                avg_loss.backward()
                opt.minimize(avg_loss)
                model.clear_gradients()

    model = MyNet()
    train(model)
.. parsed-literal::
start training ...
epoch: 0, batch_id: 0, loss is: [2.3080945]
epoch: 0, batch_id: 500, loss is: [2.326215]
epoch: 1, batch_id: 0, loss is: [2.0898924]
epoch: 1, batch_id: 500, loss is: [1.8754089]
epoch: 2, batch_id: 0, loss is: [2.2416227]
epoch: 2, batch_id: 500, loss is: [1.9024051]
epoch: 3, batch_id: 0, loss is: [1.841417]
epoch: 3, batch_id: 500, loss is: [2.1239076]
epoch: 4, batch_id: 0, loss is: [1.9291763]
epoch: 4, batch_id: 500, loss is: [2.2363486]
epoch: 5, batch_id: 0, loss is: [2.0078473]
epoch: 5, batch_id: 500, loss is: [2.0765374]
epoch: 6, batch_id: 0, loss is: [2.080376]
epoch: 6, batch_id: 500, loss is: [2.1759136]
epoch: 7, batch_id: 0, loss is: [1.908263]
epoch: 7, batch_id: 500, loss is: [1.7774136]
epoch: 8, batch_id: 0, loss is: [1.6335764]
epoch: 8, batch_id: 500, loss is: [1.5713912]
epoch: 9, batch_id: 0, loss is: [2.287479]
epoch: 9, batch_id: 500, loss is: [1.7719988]
epoch: 10, batch_id: 0, loss is: [1.2894523]
epoch: 10, batch_id: 500, loss is: [1.599735]
epoch: 11, batch_id: 0, loss is: [1.78816]
epoch: 11, batch_id: 500, loss is: [1.4773489]
epoch: 12, batch_id: 0, loss is: [1.6737808]
epoch: 12, batch_id: 500, loss is: [1.8889393]
epoch: 13, batch_id: 0, loss is: [1.6156021]
epoch: 13, batch_id: 500, loss is: [1.3851049]
epoch: 14, batch_id: 0, loss is: [1.3854092]
epoch: 14, batch_id: 500, loss is: [2.0325592]
epoch: 15, batch_id: 0, loss is: [1.9734558]
epoch: 15, batch_id: 500, loss is: [1.8050598]
epoch: 16, batch_id: 0, loss is: [1.7084911]
epoch: 16, batch_id: 500, loss is: [1.8919995]
epoch: 17, batch_id: 0, loss is: [1.3137552]
epoch: 17, batch_id: 500, loss is: [1.8817297]
epoch: 18, batch_id: 0, loss is: [1.9453808]
epoch: 18, batch_id: 500, loss is: [2.1317677]
epoch: 19, batch_id: 0, loss is: [1.6051079]
epoch: 19, batch_id: 500, loss is: [1.779858]
Making predictions
------------------

Once the model above has been trained, we can use the network to compute the high-dimensional vector representation (embedding) of any image. By computing the similarity between that embedding and the embeddings of the images in a gallery, the gallery can be sorted by similarity: the earlier an image appears, the more similar it is.

Below we compute the pairwise similarities between all images of the test set and display some of the similar images.
.. code:: ipython3

    near_neighbours_per_example = 10

    x_test_t = paddle.to_tensor(x_test)
    test_images_embeddings = model(x_test_t)
    similarities_matrix = paddle.matmul(test_images_embeddings, test_images_embeddings, transpose_y=True)

    indices = paddle.argsort(similarities_matrix, descending=True)
    indices = indices.numpy()

.. code:: ipython3

    num_collage_examples = 10

    examples = np.empty(
        (
            num_collage_examples,
            near_neighbours_per_example + 1,
            3,
            height_width,
            height_width,
        ),
        dtype=np.float32,
    )

    for row_idx in range(num_collage_examples):
        examples[row_idx, 0] = x_test[row_idx]
        anchor_near_neighbours = indices[row_idx][1:near_neighbours_per_example + 1]
        for col_idx, nn_idx in enumerate(anchor_near_neighbours):
            examples[row_idx, col_idx + 1] = x_test[nn_idx]

    show_collage(examples)
.. image:: image_search_files/image_search_22_0.png
The end
-------

In the results shown above, every row after its first image contains the gallery images ranked by their similarity to that first image. You can also tune the network structure and the hyper-parameters to obtain better results.
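As a final sketch (not part of the original tutorial), searching with a single query image only needs the embeddings computed above: embed the query, take the dot products against the gallery and sort.

.. code:: ipython3

    # Sketch: embed one query image (here x_test[0]) and rank the gallery by similarity.
    query = paddle.to_tensor(x_test[0:1])                 # shape (1, 3, 32, 32)
    query_embedding = model(query)                        # shape (1, 8)
    scores = paddle.matmul(query_embedding, test_images_embeddings, transpose_y=True)
    ranked = paddle.argsort(scores, descending=True).numpy()[0]
    print(ranked[1:near_neighbours_per_example + 1])      # the 10 most similar images; rank 0 is the query itself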
{
"cells": [
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "ueGUN2EQeScw"
},
"source": [
"# 基于U型语义分割模型实现的宠物图像分割\n",
"\n",
"本示例教程当前是基于2.0-beta版本Paddle做的案例实现,未来会随着2.0的系列版本发布进行升级。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1.简要介绍\n",
"\n",
"在计算机视觉领域,图像分割指的是将数字图像细分为多个图像子区域的过程。图像分割的目的是简化或改变图像的表示形式,使得图像更容易理解和分析。图像分割通常用于定位图像中的物体和边界(线,曲线等)。更精确的,图像分割是对图像中的每个像素加标签的一个过程,这一过程使得具有相同标签的像素具有某种共同视觉特性。图像分割的领域非常多,无人车、地块检测、表计识别等等。\n",
"\n",
"本示例简要介绍如何通过飞桨开源框架,实现图像分割。这里我们是采用了一个在图像分割领域比较熟知的U-Net网络结构,是一个基于FCN做改进后的一个深度学习网络,包含下采样(编码器,特征提取)和上采样(解码器,分辨率还原)两个阶段,因模型结构比较像U型而命名为U-Net。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2.环境设置\n",
"\n",
"导入一些比较基础常用的模块,确认自己的飞桨版本。"
]
},
{
"cell_type": "code",
"execution_count": 1,
"metadata": {},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": "'0.0.0'"
},
"metadata": {},
"execution_count": 1
}
],
"source": [
"import os\n",
"import io\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"from PIL import Image as PilImage\n",
"\n",
"import paddle\n",
"from paddle.nn import functional as F\n",
"\n",
"paddle.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "VMC2xLAxeScx"
},
"source": [
"## 3.数据集"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "H0KiJ_5N936Y"
},
"source": [
"### 3.1 数据集下载\n",
"\n",
"本案例使用Oxford-IIIT Pet数据集,官网:https://www.robots.ox.ac.uk/~vgg/data/pets 。\n",
"\n",
"数据集统计如下:\n",
"\n",
"![alt 数据集统计信息](https://www.robots.ox.ac.uk/~vgg/data/pets/breed_count.jpg)\n",
"\n",
"数据集包含两个压缩文件:\n",
"\n",
"1. 原图:https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz\n",
"2. 分割图像:https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 119
},
"colab_type": "code",
"id": "xJd9y-u9eScy",
"outputId": "3985783f-7166-4afa-f511-16427b3e2a71",
"tags": []
},
"outputs": [],
"source": [
"!curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz\n",
"!curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz\n",
"!tar -xf images.tar.gz\n",
"!tar -xf annotations.tar.gz"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "L5cP2CBz-Mra"
},
"source": [
"### 3.2 数据集概览\n",
"\n",
"首先我们先看看下载到磁盘上的文件结构是什么样,来了解一下我们的数据集。\n",
"\n",
"1. 首先看一下images.tar.gz这个压缩包,该文件解压后得到一个images目录,这个目录比较简单,里面直接放的是用类名和序号命名好的图片文件,每个图片是对应的宠物照片。\n",
"\n",
"```bash\n",
".\n",
"├── samoyed_7.jpg\n",
"├── ......\n",
"└── samoyed_81.jpg\n",
"```\n",
"\n",
"2. 然后我们在看下annotations.tar.gz,文件解压后的目录里面包含以下内容,目录中的README文件将每个目录和文件做了比较详细的介绍,我们可以通过README来查看每个目录文件的说明。\n",
"\n",
"```bash\n",
".\n",
"├── README\n",
"├── list.txt\n",
"├── test.txt\n",
"├── trainval.txt\n",
"├── trimaps\n",
"│   ├── Abyssinian_1.png\n",
"│    ├── Abyssinian_10.png\n",
"│    ├── ......\n",
"│    └── yorkshire_terrier_99.png\n",
"└── xmls\n",
" ├── Abyssinian_1.xml\n",
" ├── Abyssinian_10.xml\n",
" ├── ......\n",
" └── yorkshire_terrier_190.xml\n",
"```\n",
"\n",
"本次我们主要使用到images和annotations/trimaps两个目录,即原图和三元图像文件,前者作为训练的输入数据,后者是对应的标签数据。\n",
"\n",
"我们来看看这个数据集给我们提供了多少个训练样本。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 204
},
"colab_type": "code",
"id": "tqB7YQ4leSc4",
"outputId": "8872356c-ef32-4c94-defb-66250a00890a",
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "用于训练的图片样本数量: 7390\n"
}
],
"source": [
"train_images_path = \"images/\"\n",
"label_images_path = \"annotations/trimaps/\"\n",
"\n",
"print(\"用于训练的图片样本数量:\", len([os.path.join(train_images_path, image_name) \n",
" for image_name in os.listdir(train_images_path) \n",
" if image_name.endswith('.jpg')]))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 数据集类定义\n",
"\n",
"飞桨(PaddlePaddle)数据集加载方案是统一使用Dataset(数据集定义) + DataLoader(多进程数据集加载)。\n",
"\n",
"首先我们先进行数据集的定义,数据集定义主要是实现一个新的Dataset类,继承父类paddle.io.Dataset,并实现父类中以下两个抽象方法,`__getitem__`和`__len__`:\n",
"\n",
"```python\n",
"class MyDataset(Dataset):\n",
" def __init__(self):\n",
" ...\n",
" \n",
" # 每次迭代时返回数据和对应的标签\n",
" def __getitem__(self, idx):\n",
" return x, y\n",
"\n",
" # 返回整个数据集的总数\n",
" def __len__(self):\n",
" return count(samples)\n",
"```\n",
"\n",
"在数据集内部可以结合图像数据预处理相关API进行图像的预处理(改变大小、反转、调整格式等)。\n",
"\n",
"由于加载进来的图像不一定都符合自己的需求,举个例子,已下载的这些图片里面就会有RGBA格式的图片,这个时候图片就不符合我们所需3通道的需求,我们需要进行图片的格式转换,那么这里我们直接实现了一个通用的图片读取接口,确保读取出来的图片都是满足我们的需求。\n",
"\n",
"另外图片加载出来的默认shape是HWC,这个时候要看看是否满足后面训练的需要,如果Layer的默认格式和这个不是符合的情况下,需要看下Layer有没有参数可以进行格式调整。不过如果layer较多的话,还是直接调整原数据Shape比较好,否则每个layer都要做参数设置,如果有遗漏就会导致训练出错,那么在本案例中是直接对数据源的shape做了统一调整,从HWC转换成了CHW,因为飞桨的卷积等API的默认输入格式为CHW,这样处理方便后续模型训练。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"import random\n",
"\n",
"from paddle.io import Dataset\n",
"from paddle.vision.transforms import transforms\n",
"\n",
"\n",
"class ImgTranspose(object):\n",
" \"\"\"\n",
" 图像预处理工具,用于将Mask图像进行升维(160, 160) => (160, 160, 1),\n",
" 并对图像的维度进行转换从HWC变为CHW\n",
" \"\"\"\n",
" def __init__(self, fmt):\n",
" self.format = fmt\n",
" \n",
" def __call__(self, img):\n",
" if len(img.shape) == 2:\n",
" img = np.expand_dims(img, axis=2)\n",
" \n",
" return img.transpose(self.format)\n",
"\n",
"class PetDataset(Dataset):\n",
" \"\"\"\n",
" 数据集定义\n",
" \"\"\"\n",
" def __init__(self, image_path, label_path, mode='train'):\n",
" \"\"\"\n",
" 构造函数\n",
" \"\"\"\n",
" self.image_size = (160, 160)\n",
" self.image_path = image_path\n",
" self.label_path = label_path\n",
" self.mode = mode.lower()\n",
" self.eval_image_num = 1000\n",
" \n",
" assert self.mode in ['train', 'test'], \\\n",
" \"mode should be 'train' or 'test', but got {}\".format(self.mode)\n",
" \n",
" self._parse_dataset()\n",
" \n",
" self.transforms = transforms.Compose([\n",
" ImgTranspose((2, 0, 1))\n",
" ])\n",
" \n",
" def _sort_images(self, image_dir, image_type):\n",
" \"\"\"\n",
" 对文件夹内的图像进行按照文件名排序\n",
" \"\"\"\n",
" files = []\n",
"\n",
" for image_name in os.listdir(image_dir):\n",
" if image_name.endswith('.{}'.format(image_type)) \\\n",
" and not image_name.startswith('.'):\n",
" files.append(os.path.join(image_dir, image_name))\n",
"\n",
" return sorted(files)\n",
" \n",
" def _parse_dataset(self):\n",
" \"\"\"\n",
" 由于所有文件都是散落在文件夹中,在训练时我们需要使用的是数据集和标签对应的数据关系,\n",
" 所以我们第一步是对原始的数据集进行整理,得到数据集和标签两个数组,分别一一对应。\n",
" 这样可以在使用的时候能够很方便的找到原始数据和标签的对应关系,否则对于原有的文件夹图片数据无法直接应用。\n",
" 在这里是用了一个非常简单的方法,按照文件名称进行排序。\n",
" 因为刚好数据和标签的文件名是按照这个逻辑制作的,名字都一样,只有扩展名不一样。\n",
" \"\"\"\n",
" temp_train_images = self._sort_images(self.image_path, 'jpg')\n",
" temp_label_images = self._sort_images(self.label_path, 'png')\n",
"\n",
" random.Random(1337).shuffle(temp_train_images)\n",
" random.Random(1337).shuffle(temp_label_images)\n",
" \n",
" if self.mode == 'train':\n",
" self.train_images = temp_train_images[:-self.eval_image_num]\n",
" self.label_images = temp_label_images[:-self.eval_image_num]\n",
" else:\n",
" self.train_images = temp_train_images[-self.eval_image_num:]\n",
" self.label_images = temp_label_images[-self.eval_image_num:]\n",
" \n",
" def _load_img(self, path, color_mode='rgb'):\n",
" \"\"\"\n",
" 统一的图像处理接口封装,用于规整图像大小和通道\n",
" \"\"\"\n",
" with open(path, 'rb') as f:\n",
" img = PilImage.open(io.BytesIO(f.read()))\n",
" if color_mode == 'grayscale':\n",
" # if image is not already an 8-bit, 16-bit or 32-bit grayscale image\n",
" # convert it to an 8-bit grayscale image.\n",
" if img.mode not in ('L', 'I;16', 'I'):\n",
" img = img.convert('L')\n",
" elif color_mode == 'rgba':\n",
" if img.mode != 'RGBA':\n",
" img = img.convert('RGBA')\n",
" elif color_mode == 'rgb':\n",
" if img.mode != 'RGB':\n",
" img = img.convert('RGB')\n",
" else:\n",
" raise ValueError('color_mode must be \"grayscale\", \"rgb\", or \"rgba\"')\n",
"\n",
" if self.image_size is not None:\n",
" if img.size != self.image_size:\n",
" img = img.resize(self.image_size, PilImage.NEAREST)\n",
"\n",
" return img\n",
"\n",
" def __getitem__(self, idx):\n",
" \"\"\"\n",
" 返回 image, label\n",
" \"\"\"\n",
" # 花了比较多的时间在数据处理这里,需要处理成模型能适配的格式,踩了一些坑(比如有不是RGB格式的)\n",
" # 有图片会出现通道数和期望不符的情况,需要进行相关考虑\n",
"\n",
" # 加载原始图像\n",
" train_image = self._load_img(self.train_images[idx])\n",
" x = np.array(train_image, dtype='float32')\n",
"\n",
" # 对图像进行预处理,统一大小,转换维度格式(HWC => CHW)\n",
" x = self.transforms(x)\n",
" \n",
" # 加载Label图像\n",
" label_image = self._load_img(self.label_images[idx], color_mode=\"grayscale\") \n",
" y = np.array(label_image, dtype='uint8') \n",
"\n",
" # 图像预处理\n",
" # Label图像是二维的数组(size, size),升维到(size, size, 1)后才能用于最后loss计算\n",
" y = self.transforms(y)\n",
" \n",
" # 返回img, label,转换为需要的格式\n",
" return x, y.astype('int64')\n",
" \n",
" def __len__(self):\n",
" \"\"\"\n",
" 返回数据集总数\n",
" \"\"\"\n",
" return len(self.train_images)"
]
},
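{
"cell_type": "markdown",
"metadata": {},
"source": [
"A quick sanity-check sketch (not part of the original notebook): wrap the `PetDataset` defined above in `paddle.io.DataLoader` (the Dataset + DataLoader scheme mentioned in section 3.3) and look at the shape of one batch. It assumes the `train_images_path` and `label_images_path` variables defined earlier."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Sketch: feed PetDataset through a DataLoader and inspect a single batch.\n",
"sanity_dataset = PetDataset(train_images_path, label_images_path, mode='train')\n",
"sanity_loader = paddle.io.DataLoader(sanity_dataset,\n",
"                                     places=paddle.CPUPlace(),\n",
"                                     batch_size=4,\n",
"                                     shuffle=True)\n",
"\n",
"for batch_id, batch in enumerate(sanity_loader()):\n",
"    images, labels = batch[0], batch[1]\n",
"    print(images.shape)   # expected: [4, 3, 160, 160]\n",
"    print(labels.shape)   # expected: [4, 1, 160, 160]\n",
"    break"
]
},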
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "GYxTHfbBESSG"
},
"source": [
"### 3.4 PetDataSet数据集抽样展示\n",
"\n",
"实现好Dataset数据集后,我们来测试一下数据集是否符合预期,因为Dataset是一个可以被迭代的Class,我们通过for循环从里面读取数据来用matplotlib进行展示,这里要注意的是对于分割的标签文件因为是1通道的灰度图片,需要在使用imshow接口时注意下传参cmap='gray'。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 479
},
"colab_type": "code",
"id": "MTO-C5qFDnPn",
"outputId": "0937ed5e-1216-4773-9b54-16db8fe7b457"
},
"outputs": [
{
"output_type": "display_data",
"data": {
"text/plain": "<Figure size 432x288 with 2 Axes>",
"image/svg+xml": "<?xml version=\"1.0\" encoding=\"utf-8\" standalone=\"no\"?>\n<!DOCTYPE svg PUBLIC \"-//W3C//DTD SVG 1.1//EN\"\n \"http://www.w3.org/Graphics/SVG/1.1/DTD/svg11.dtd\">\n<!-- Created with matplotlib (https://matplotlib.org/) -->\n<svg height=\"181.699943pt\" version=\"1.1\" viewBox=\"0 0 349.2 181.699943\" width=\"349.2pt\" xmlns=\"http://www.w3.org/2000/svg\" xmlns:xlink=\"http://www.w3.org/1999/xlink\">\n <defs>\n <style type=\"text/css\">\n*{stroke-linecap:butt;stroke-linejoin:round;}\n </style>\n </defs>\n <g id=\"figure_1\">\n <g id=\"patch_1\">\n <path d=\"M 0 181.699943 \nL 349.2 181.699943 \nL 349.2 0 \nL 0 0 \nz\n\" style=\"fill:none;\"/>\n </g>\n <g id=\"axes_1\">\n <g clip-path=\"url(#p58ad9a7e6d)\">\n <image height=\"153\" id=\"image6a21407320\" transform=\"scale(1 -1)translate(0 -153)\" width=\"153\" x=\"7.2\" xlink:href=\"data:image/png;base64,\niVBORw0KGgoAAAANSUhEUgAAAJkAAACZCAYAAAA8XJi6AAAABHNCSVQICAgIfAhkiAAAIABJREFUeJx0vUuvZUl2HvatiNiP87qPzKysKnWzKTatNgk0YNMUQMCgKcMaGIIhzSRrbI0kAx7YHhmGQcCA/R/kIUH4B/SYgDywKUpNWObAzZdksLurWZVZlY9773nsvSNiebDWioh9MusUbuW955z9ilixHt/61gpiZgYAZi4/OUf8+Z//Jf7ZP/2n+MM//EPsNiOICJkAIgIAeO/RhYCcM3LOYGaEEMoPAKSUkHNGSgnee/RdD0dAYkbOGY4IOSbAETKzXCNn2Mt+DyHAM5Dsb+fhnEPXd2BkvQ4wDAN2ux3unz3HJ59+jtu7W3z66ae4u3+G3X6DcRxx2N8ALPcLMMgRuhDKcy1xwvl8xjAMiDGCiNB1PbwL6LoORA7eB3gv90BEYCwAE7puhPcdyANdGGQsfA/nHch5gKhcx16Pj0f827/8C/w//+b/xr/7t3+Kr776Am/fvcWyzAAnvU+UMQYTUuIyrgAQY9R5k/dSkuNs/OVYIKX84RgzA5C/iQjIDAJkrgH5XyIwMQgM1uOI5D5SBn77t/9j/PN//r/h+9//1dW42LM6Eyw78OnpCb//+/87fvM/+g38+U/+X9zd7OCDAzk5iFQgU4yY57k8kA1E1PftxcxwziHnjHmekGIqN7DEiIUzlhjL8e0gmYA555CTDJZzDsMwYLPZIKeMeZ7LIMuDJx3IhNPpiGk6w3sCMyPGBcfjA+b5gmW5IKUogq7XXeIkg6L3K5PEcORXwsEqpDaRJowpJeS4AFkWas4ZKS7IMYFzM5HlPPbMWe/bgZxbTZQjB0fynvcezhNCcOi6Dn3fw4eAvu/L3/bjvZfFqf+GIIvkY0Jg8woQiAKIHEAEDwJngB0BIDgGPKoMeAL6nvDHf/xj/PCHP8Tv/d7v4d27d6u5Z2aEdlWdz2f8/u//Pv6n//F/wH4cscwzQienjTkj51Q0mE0GESGqkBARnHMAgMvlUs7bdV1ZaZdlRp5ZB5DkYWwAASSIQCzLIoLjPMAZw2YsA+gcsMwzGAnzPIPIYbsVbeu8x3a/xf2zW2w2W4TQYZkjdts9iIG4JIAZzgM+EGKaRRupkNok5Jj0GQnkGAAhpQznMxw6HcCElCMADyIPgMDOI2VCII+cM8gTCAyS2ULKGYEIRA4RjK7v8eKTl/iV7/8qjqcHvH3/dbkHzgRU3QEQIRGDiZFZhCR4D2YPzhmcM5wjhODR9x1ijEi6OOuiyfCeihakxGB9ftPuDICYEXWhOyIwgASAVL0RsyxQBmKc4R3hv/mv/xmmecI/+a/+CYZhUA3pEGxwY4z42U9/iv/1f/mf4ZyYBNabYpaTm1bxvq5s01QAyuCU1aFaQh5MjiHnEADEZSnmI6r2ciBAtYOZX7vO6XTC5XLBMAzYbkfM8wznPYZhwDAMmKYFfd9jHEe8efMGm+0W2+1OBcfpoBKADO97EQgGYlxWK9s5BzAjxojQdyqAAJBlhbP8LVpMhC8EB6L6zERcNCqRg3MZzA7EGQQHBsuYOoe+77Hf77HZbMQyxSQTCJLJx/plmteRg3cO6uuAsxPt4v1qzHPm8neMcSVw9v3EVRCJHYCMnOnqyld3YuNS3iYsy4L//r/97/B3/7O/ix/84AcyXoAIGQA8PDzg7//9/wJxmkRlq8pc0iKmIHPRRiZQJmSmCWylmE9mwmYmsOs60V7MCD4Uk+hVMB1IhE/PK5rElWtsNhuklHC5XGSw9XqXywVdN5Tvv3jxAvf39+i6DtvtFsPQwXuC84RljgB1SEmeQ+7Xg1OG7zoZfPXVCE5NNcN5MYGAU3OSQSB4HwCwPh8h5QVBfVVZcBmZEyiLIIIYmRzI2/gwQh8wjiO22y122y3OxyfVQqLJ7PmLoDknyyWzahkCqztjc2Q+nAmZLFoTQC4Cx8yIWX23mPQYrBSEzaN6aVXsyMRLfulCh5gW/L2/95/jxz/+Y9ze3om5JhIJ/Mu//EtMpxPIEThlRLD4EU6lleRkZlLswvZgzrniy9QBbgbFAobGFEEdfyZZ1akZHDsmpVS0ZyvQ5sfFGGVy9js450WwNlt4/b3rOjjnxP/yHgzxDZkZwzDgfD5jP44g1QBiNlTzOg+AkTkiR11YGSCKADkQJGDpOg/vgJQjiAJyWsSM5SxCkhOYgpg0ynAuiMZwEB/Le4zjiKB+1TiOiMsMTgs4E6qbrpPNMt2qx+BIJpxhLotei8X0VYFzSJmRU4YPHpwZmTNC1ADNJzCLEFZf06lwMZir+bZzEzOIJSjMOQOc8frVK/zkJ3+K3/qt3xJNRkSY5xn/6e/8DvbbDbwnIGfAOThPSOW06kPpTZMJURM4LDGWVWYrKsZYNJsJR1HNRCLUzMiqvjMYXgW5FSoAxQRxFsG7XC4ggpjO8wW393cg7zHPETnLypABznCe9H25J2Jg5kn+zRmd90CKxcQwZYAWMFczKveXxdVQ8+gdIScCsxwbfAcwwKwLiXsw5FlSVpeCHMAZyKRmz4PIYRxHbDYbeO8QgkNcnPiBDmCWcfAgMEn0BzVHxFXHlKjZm5kHOHhkZuRMcBnqkmRkNZPe+TK+rSk1vzhnNNpRfooGs/lhRnYA4LE
sC/7O3/kdvH37DofDASHnjP/jX/wLPH/+HGmZVr4FiICcdHVXrWQ3VLSUPpxpuHbVdV1XBNP8hzZAsIFpNSBR9ZFMGC2qzDkjJy6wSMoZ3gfsdzt0XYfQRFvDIDDCsiwABdFmTv2z3JxDhTcE09Aowm1a1e7RtHXRKgAyMTyrJues2k6c8BgXuMBwzACCmHgQXLPwAAYRcDmf8e7dO5zPZ8zzLNchCRzkeq4iATrBOqgrc3oNk5if6MiJwGZG8BrZ5gwODPCHQmYLvQYPjJyT/mR1OdpouSqFrgv4gz/4A/yDf/APEGKM+Mf/+B/hcNgXB7869TaY8litSSy4WuujacTh9OZsQOxBnRNn1R6i/bz1wTy5IgAm0JvNpgjmPEWklCQwcB7juMEwjtjtdvjkxSd4+fIl7u7vixNM1JWHl4FM8FSDFLu2QAq8Eu6cM7quK460DWQ7RkRUtAJhAReNr4EAEzITPNWwPnOGgy0mJzhWWWRU/BybQvvb8EQHgFiQqxa7sn9bP85MLAjIJKYNENelA8SEqi9tiz3nVHy4lLKa0YyUHFKq77c+n/3Y+PyX/+gf4vHpiOC9x+3tLXKOooaZQXrXibOulmom68qgIhT2kolxQHFqZXjaVdHekA1gbnA6MSupnLed7MvlLOau+GkBLnTwoUPoOuz3e3R9h3mZcb6cELoDco5YFqDrNoD6mSkmMHHx1RhcgOXiP6oJBwg5iVbKjhB80PtKGmXaOHj1HwmEoMIq36Pk4dUtIOeQOcFlAjsufu5mM+L+7h43N7d4enyH6VKDHvG1xDn25CUwaQSLyJnXpOPGEDFEq2jALILlWEBdhmmVqq2L9soJXmGRVoullJFyQk4ZLTxy/QMAfS+4XPjRj36kbzo4D6QY4VnsPkGdW/U0i3CoRruOegCRR7uIDRBUuEwzFcHU6Ig8IbbnaGAVM7kVx3JYFkaMGUQJ4zAgI8N5CRwYQOg6mWCOyAwsMeMyEZwjLEtETgm+70UASHTEMs8IfV8EO86Lmt+u4kcNCEtlPMQxds7re4Y0VWiHc0aODATxcC3Q4SyYF5FEZuO4reaQxISJD+p0xcpsEAgQpAX2rupi+RqrdBGqniPSRa/3XnQjChTSZgJ8CMUsgvFRQZIIOBXFYShCxeSAH/3oR/B/8id/8rvT5byyp845gBp/C9W0tdmBVtCuTaP5AeVRuPwPOgfV/DIjN6q2YECN/0Y6SNVPEiT7cHODrutwc3OD7XaLw80N7u/vMY4DgvcgJ4FNToK620C0EbLdhw8VVhF4wsN7EyKNspr7MOECKograyeJgDiolrFjJPqCI4lO9YfBWOYZX3/9NV599dc4HY+Ies8g0VrVVzaEXv+jOqbOXftl9fvreTGfG5DHsu+56tbAAh4Byr3zCAqaWwbhY7/Xfx1CCPiXf/RHCH/2Z3+Gl588XwlJttDYiT/BWQdOb5QB0QaNo28mpn3llEFcQVtZwlUgqVlB16bYVkcrzMwSScqEStjvFNDcbrc4HA7Y7XYFNGYIThSXGZfTCeM4luAhaiRs17AI2DSuKwCuwikQvy6nrOPDBZUnX4MXG7fEEYhitkSjMzwEU3Ps1EeqC7brO+x3O+x3B3Rdj34YkJZFoBodW1nMBhslEHn52wmMYD51WSYNrIHm3/Yl46vngwdYrU2WeyfnkMFgx3AKQmfVjy3w2+Kkoskkwv+Lv/gLhO1mXGNTEFBUJUPVv7wnOI9Ifpkgu1msH8iiSedd0QwOkOiLSCEG1sFzIKqwhqHufd+DmXE4HHA6HQv+JIEDoRsGZAY22x1ubm+x3e3gvVMhEj9KXA5GXM6YKcNrHs6ASu9tEWj2gpwAnLnNPABECSBCilF9H4KDA5NMtj1zNZEAfGuCogK81bSBAXay+Lq+x92z53j2/CXev3+LlBbM04yYosAhBQvlAp9Uy+BV49H6faom8dsEDFAo1DAwE3yX4Vj8NTIr5mQmHct5Ja3UEiuyQiUJOXfgzHjx/B5hu92swvL2xjjL0sgsoCKcK6ujrHgiRclRLiiTJv9aNESaSuHMYEfCTCgwQdVigrCL1jRfbJomxChAIeeq3fqhx2a7x83NLTabTTlHzhEp5+KvIEd0ISDFBfN0wTBsyrVFG1NZlXAKz9j954ycHZgngAT9lzFwyJTBiHAOxafs+x4xJnjXlXEomjlnsEf17XIGuwwiLzDM4QYvXrzEN998hePxAY/v38sTrOAJVCFSYS+rhCrozWi/96FglQgUAKfGrTHzS+KSlKwCNWcrXyV4VBfGewlKUhYgmhKje/4MwTQSUMHOVkhSSgKYqvAVTWWRYvOeo5qhN6e9TKblNIsZy2ut11yv86FENUSE8/ksT5VFCwTfFZhkt9thHEcsizjqKSXMiznAOkA5Iae5mNwQegRep6wK0AvCUpLmxshIClVknXSLqLkAm1CBTjnDqw+Zc0bo3ErQXLPyzdclQvFt+nFEyozzNCHlJHhrXs/Jhy+upkTvw3xpXd6Kq314iAlc8asL/FF91m8T1Bb6EetgEa4XReEkK+G+7QTmbHtfhbDFY8yPKZQUtz5VYTOoCYnGfVqWwtoAPoRFWozHhNWA35wzOo32NpsNQKR0o1gS5SEEFWhR2zFKNDlNc/W9iDAvC5gqqp1SAscE1myA+RVl2iiAyMPSLC3u5tWE2ABdL5prf8VMM+dWaBh9P2C/2+Pu/h7b7VYS7x+ZG72KarVcHXlUraJ3vfrXAi4TQiq/lcf8IIAr1KDr6zdphho8ETon3D/nHJx36LogCfJiKjRctQNFnRK882oy9T4VQ7ILF4Q4hHLLhqyXFeyc5ODU0bT35boK4jrhTtnkFCpRmisoCsb+sBcHfl5wPh9xPo94enoCABwOe+ScscwLHLEEcgAcROCXnBFyxtBpXtTrmueMOTMCBXhICN9GjeRlEDmz+mu5DCyrv+ScQ5wXEfLAyOxAQTSu8wCTmtBiJllBVVlEQxdwczig77oCQzjykspaCRo3v9vilgDAnP8qVQp8qMbEh3qlkTEVOQtnIXQmOaeQFldSD0kL5hIKiNsETwjkinIIQGVWBk1bmCksgsBcMu6F5qOmxlYoAFDOAhvoU7YIudyf0Hfg1uxMqOC2qapxHEvKJ/MCRxL99X2PZVmQAXz3l76L7W6Hvu8xDAOICPMywxFEMFNEcB7IQo0Jviv3l1JG72tEKYi3JNNd8ICTxWURtGggeY6krgCrd78sC7Jvvqv/835NEqhRcvVbW63NBPjgcDw+4nQ+Fq3XLlQ5TiWBDT9rhbAK2MqXYwKYyuet5ms1YZ2Q5jvUwr+tjBX7LK6JZurtOS2DEOzmvfdAqimidtLJzuca+EFvoE1FfCxVtMJkIBDAkuKVWq+TYEK8LAsulwu894hLwjhWKs/zl5/AeY95WbBpzBIAeOcxL2ekFOGpwfHIwTWQSx86dM7XqFInsMAfihsVn22pxMwKVxhSnhFjBZzbScscRT/wWFI3Zg1WPxKuiaY3AuSV6WrH00BgkcIP538tMFfOfiNo1+7JB4fqe+
b8X39GJNCGeaXtMTauwSaOgMKsaDGfVbSYuEAYdgLBlLyatVQnu9VicgKFLdSsxCponDJy45PZZIcQCi15ThFdEOrO5XIRnljfI3QB+8MW+8NWtdxUgo+UU2GODEMPy0AwMxInUBTNaGNn92wOMKBUGBfgGIhpKdE3q6kQn65OmPHmZPwqNSolRhcki9F3vboD6qYQaRbFY+i3+OTF38AnL77E+ekBcV7AuaZ9WqGTC1+9BzTjTh985gMJHtbM7fV32pfJRtL7XH+/vuXIQRA0Kr5mmUsb2JhSMZfXN2wn5LoExBZnXkVL8hChJMHXjqJgLiJUqZyb9eYF1vAg1lXTRKiu7+GIMG5G+L4rE9n3veTQYsZ0mRDnWfJqMYIoin/ne4S+QwaQmODJlYgveQY7oB/GmnBXYclJ7supwCQkpLxAsj1ONWyGDwExLgq3CByRUoIQHvXZjaER5Rpi7yzpHJWWI77qsBkx7kYB2ljZE+5b5qUZ3g8EEOpjkZNxX8EZRhn/NhVo0aplMFrTSwUYbrWtMNS9HOcdjGhARAhW9OFcLRYxE1rVuWJOJGYkmY8TPDzUbyOC7zpAheP64Y2BUXjntgLUue+HQc1pTY4DRoRj3N3fo+/7Alm8fPlSuGqKpR2fnoT/7wFwFEH2EoHFSOi6QaqN4DBPEX3vMPgRIYgzbmbffAl7z7RusmRxYrjgyoKySTVz07KCa8bCF1Np40uEoqVdZngn2jE44O72Bnf3z/H+3TdwLuF8OpUoeK196u9mps0KWaKckWR9N6aMRTy+RcCg6imWIKNCNvXztem2c9b78t6e00moYQxSO6hNZBeaNSrY6i2nmWqmwCtKb3+bT2eRZ05S9CGDxLAwKPhqFpm58M9Mk0kelTBNUkn08PAA5xxevXqF0/mEx8fHlXAQALCwBeY5YZ4t1eM1DydRo+XpTCicc1iWZeW02mdCRgwAPJwPxQ8j/cysgUXUtshYiZNGB7Jn+iDoySylaHqOzWaH29t7nM4XnM7nWlTzLZrneqxDEX6JCtfycYXzlQxChSJEoXgQBQDuo9dtQV+iSmhtn8nOFSx1A5QAtCRHS/Rj8ENzcos+PVCot2Zy7Hvt6hLnm5FyLnWXRIQlKzPTAVFNSFl1juD6Drf7Pe7u7tB1HTabDeZ5lrKwPoCYkaJo46enR3BO8MTwzqPf7tD3Uj4XQleSt1ZK1vUjYmIMvVREjX2HnBNc18N4UoBhXYDvenmOrMn2HOUeEbRsLQjAmsXPhKfCtEhpAdCtimMMpjBBIQJc8KAwgILD3f0dkCccnx4wz40mK35hDZpaQahCrGjYletTrksZUoMkaSmwCVa9LxG2gpCWqLposua85uYUQdRfi+P/sVcRDiVAtr6BPRgTVpScktgGIWaL+GraqgUq5fxipkOnEe2Syrk61TZW3zmOo2g3CoXEuMwzaBFsSnJwWUq5OldMlEEifT8WzeOVMdv3fVkIswKxobdJybAxawtnbBEx1zxrzhnzMotP2kaYTRK85ci1/Drnqt/rnMfNzQHeEU7HI87nCSnV2tUygWhA1mau2ijxW7WfsWOITARUgABmdXUowwDAKjfropbqDnzoM7b3FVrJM4FwqNqnlHa5dRhcByug04ruxDVV5BgIWpSaIQ7+0pgKmQiBNbxCCUbBCV78I3iHzWaD25sbEYJ5xmazwTRN2PKAFAVIJgjAGe2hg4frOjgX4JxH3w/FPBYh05+a3nLwvtK+r/3TVTTMAkzaeOac1R/0xSFnZoS+k6iL6+JaO+7VkxG/xoFI8sHPnj3Hy08/gwPjzTevSvFLIydQMGuV2lvJ0rUGU0ExAiSp9gKtPxehEY9OJMytzvWt2NrVde334qWGIGG6DVqrcUgwyCJwzjmNJmqkkWMC+yZoAABHWJRX3/pv9WahbFyhInddJ8wGtedOTZzlJU3rbLaDUEkWE1gHT4S+60Deo+t6bDc77HZ7DINosK7r64qzcL5UU6vAqzklZJCv97zWXlZwTCWT0Gl9pnfCXwMBoRuQEzRT4Ao7xLRrO6lVeOXvYRiwGXeIMeN0mUTw3Lp6i5P4hEytdjWhNU22Npf1OwZPAUBeaUTzK733SkbwV5GrCZwdw8XnLE90pdWcre6WT3UtqSlpYahGEQ5rPyArBcjOZWAmoJpRI9frouCcpepJtKRQfIvbQEDX9wihx/5wKMlvcawXnDTiOp2OhbfPKih91yt3bMQwbtCZloFgga2wk2kCu08SwfDewanpuzY7YgYdyAfxV3MC4BBCB8Fora2AlPxLMENF4LTSpEyWaDHJtIAYfR/w/JOX+M53fgm3t7eFN2eTN46jAONurUm+HZJAyVMSrn20NT/fBE3ID1Ug68sYwkl/lJ61huVWr2BIe9JcZGvPq2kQjeq8IvJYh8oldE65lLgVHyJrGZlzyE3hLjOv/BxmQtAolYhAwWO320ui2HuwM56Y8KsMchjHDZAYFAJCN2Cz3WG736MfNwhNpVTbDOZjfoP9G2NC1wvijmbhtK0YdNaEceBr8tjK/yxCJkIDzKJQYYh84/TrtQEx0SyTXFIyyjK152iFrQpFOcOV1llrnIparDXNt2khO74k2qmei1VpEonQGbGq/Z7JQTAVDAa8rWp8+CqOJtXV0GoyU/vgmoohImROYjaoIuJ2cYMQRGgVPjFB8KIhlmWGpQCtkYhzXTE9RMJNk4KSgO1uj34YETptRtKYyZbxYf+KaZe8pLKrwCzXNoLhNT5VJsK3KLzCLx+ZMEC5cRDAWWCUdelfTU4LxOB9gA8BKWdE1fgmeO0YtsHAh84366xp7tGV3z7y+jBIIPOTqH0WKsJVjnMEcC2NtJdZC1e0EEsNHtxagMQ0VKYBmsmyi9hKKhcg6aoTU1LU+mMOKMp1RUOSIN8WxqsZSnHBNAmfzDSZXV9oO4DvgvaT2ElS22mbhaZvxfX1bSGIeyCNUzgnEGoLpsy1dM9+Wqo4WPAg86ey4WGo1zOsL2l3IO+CNEtxYSVkOn1aGyla/MWLT/Ds2TNs98I6kcBE4BDRkKlkFa61kf0HCJjtwPBAYbm2gmSQxMcEzcxYKSRXig+3X2BVTKqFGXklPyW6bM0I5YpjGRZiAF7OuQBv8n0Rde+9ZSHqZLCE5ERr6KJw1UJAUL+mpUWXlBQzkqtRYdBCD6kDTBgG8VWGccB2u8M4bopmtOtYCN5Gy/ZZMT+Z4b2luDyQZdQyV46+PXtLtgTLMS54jfIU8sgZcB4hdGXcylhq8YhFd/V982tEu0pxshAYY0xSZdWQEHa7DWKcYXanNelF2OTpi0QQVR8UH3zTAoFrc9t8S8fww+NNXyrwTWvcLngX4ByrGVInNNZEd1nFrR1fXZzKzXu9UVYBu27Kcm2mgg/oukGqelB9DhvMw+FQwNPW+XYuIGcRni4M2Ix7eN+h6waE4BFCBYFNoIC6cJjFtEmyWlJhTIxM1echEuGjRlPIfVfcqEAcaFNwddHmzOh6mRTvG8YDoLBB9aOMnm7aRJD/rVZuyTh5cjhfhDfXadkfc0sBa
ubIgmg25aOmXxdvK0TFmuVatng9b+34F7NejoVaLNFiaMZex6LTiTDbzUiUV5PUYj8EiDNPVdOxmVGuk1HyelcPX6JKIvjg4TvBhZBZtVotUdtst1iWpTjwlhi3ye66Af2wUc6+aRwTRK9atNJb2meyMF20lSTEfR9kcTgCHEBtHwwogBklhUIWiV4NaOYoGQySQAgsTFxZzEHXp2jIGuGiOtdmGULAzc0dXrz4BI+P75HTjOPDUibashDQhR0Lx60FTGVGMymLyIT5Soiuo+dWAA2Qrn+jWLdGGmE20zBWNGSLUCdfzENhqLZZgManMtttDrG8XFFuNqF2bQfpSrN6eJ0kS36DCD44OCdcq2Hoiwa7JvsZJXq332Icd9jvD8W8mKl1hRRZ77elGZnglESyCrw53BZVrgZcJ420eMYWg0Ef1kdDil20i5ALYDjNe1K9H98hCfSKYB4HZ2SWCihycj/7/QGb7RYxRVymCXO8IGtbp0U77nCWcrp5vqy0DsoVFYss9ZtVk1UZuQ4YrrG3MhKNoLVvWzCxVkY236HFRExw6Ip810Zn1xBHkcEGd5LKzSxsFRaud9ty03tfqrOdZgvIAT54zTG6lUDYKrXBkc89NttDiTjtJUJpKZvaLbEV1FajMqO4CYD0YRNKtrJIc/VlWgG1CanZA8lbOgc41wHk4PX5KuxhAYN2/Cn4ogOz5E9hpg5OcL5hg5vDDabTE5bpjLic4X3QCnqJVqWQZirz8q2BlmljkQuYuhAsULoIrJABZYu0FsLOZHhD0aao1s857emmrbfCitGqDYK9qsPr/hVr+g8XQcyo7At7AG9FqDkjJQVxLROg55aIKYgQ2Dw0CPg8z9IWSrP5xV9yPcgHzPOCoe8KKwKotG4iIRR6XxF2G7gqjMZ7A8gRUloQgpa0eY/OBy170yip5ZvlrAxW7cnRCN4wDHDBwbluBUBn7fGR84LgAsDSU4J8EAOahfuVIXhj3wfc3NwjhB7zkopgAdAeHnLOlGJxyE0Urs14ealGZ+s7BxTtX/AvfOj0ryNPM1VXste+mpxtyW9Y7jE4r30asjAJmpteRQwa6dnKtldMCR0BUfEUdOKPUM5SgQwUjr8Uh4ipDZ1OICplyJoOExEG7VMh6aUB3TCUwtwKCGdYHaIIk5igGOOqpN6unXNG14tJY4ZGR7rokhAVyResB6WdZ5b+r6QNWCywIB8kfeVxU0SAAAAgAElEQVQlgux7YXNI9yFGigk5RYCCCBJLWiqlKM+SJUKTajRpZfXs2TPs9zfYbnfSmfsxIqUFm+0epxPAfMH5fAFlBgoiYEU70ABgbQZl/iud2twlA1aJDH+rQRDQpo/kxDW/e+WXFj6bNKwJLbc9G15CFu7nYt5aU2OrpLR2aoLa4KVwwySfAaFXp4QMIfoFLdAwjRCCFmGoiTNYwXAxsPDRfAjohwH9OKLvOvRDb0OnAmU+mWonApgTKFTBK/WVgjWA2WnXHmFzJmOOeAdmi6DMB2FF7F0xwdYnAiCxAKTWAFDhFuCYIIXJGlrJ9SQUh0AZDvBN2kcX3jAMONzc4HA4YIkXUMyY5iPiMmFaJtzutzidTghkRSUS3cN/zL9q3gNDu+qs3q8KQ90Fg+Gppfh86DK1EW6rkJgBVzpPO+n4xwC60MPDSejb+CA1oUyrNM0QahUQEWEx05kyXJIrFT+oQasBRoyzVPvoyokxFTOZc8bxeGxQbmCzkV4X2bA0R5hjlIGAQ4wL5lnap7eAq73a3miWAkuJpd9HAVolUOBkxTR19TJntKwFOYZLKsgGdxxHWGJbzLIKOIQCTgzkmMr1gFQiOSvqdeSECTxI77UYI6b5IkDxEvHZp5/i5uZGLEWJglvYqUW9muDLBM6t368vIzbac1/7dx/ayetz2Lgxs6SVWnLhtttIZj9luK4H0rLSWtcnLf6Zc0iwgUQ5JqeMxvyvXjFGDQCEFx85FuGw1qCDMi+89xg2G/gulIkzJqskkC26q8W0FfOqlCUocdK7ToQIQN/JphNgCwgYRAnkCch1wMqCW8EPNtjVJ+tChxRlIiwh7r2Hg91XbZDHnCCQJMFDesp6DkhRMiFjP4IQ8Fc/+6lE/46kMgqM49MJn336Qu6Jcw2QGnzs+kWAMjfWc7l+FhYLQJLua5UCAF1UH5lQGwlXmdEA4Nq26d577Kwt5tCLyiVhJJgW+9iPCal3Hh15OH3AnDMihAlrNyrXE4fZePfWA2yeLiVtZGtoWRYcj0eBNLoRm3G3EvIYI+bLpBhaKHSgFmiUgTHmgGorVsA5M+Z0KQyPZZlF63DCnGZkKO0pA1k13vViu6Yot8GFvW9RfNWqBq1oloClOV7Ki1RIkQMyo+t6PHvxAt//lb+FcdyBXNAF73A8X4Rp3AQ7BNtgQjZ3cFwxOACAd9IpGwDFDMq8us/2uaqWrXiZEDVbAqfAKG3A0JZZqtWrxQ3b7RYANOrTwlo4OKoA5rWUrl5ZuioXyIIAj8pHYgDeB93EQTIMcm2Z4JSl0UlSMxVCKNz+0+mE7f5Og4UAIJe2BLZil2UB0do/SCkVmCNl2Sug5eQDWCH2IjAijJaqccGjruJcFheg/P9QeWnMkk3IzAiharFpPorpKL1ixR8sfi1HOHYgHa+olU2h67U96TO8efMKp4cHxOUM52UxPz4+SgRvwlCg/pXIfLAAPKgENSsw1pHUG5Cl44C271k7BhKkYaUyK55Zd54J4ziu0i19P8BrysW5gOlyBsd5BWFY+qGtzTRTstqEINdWBlA/bhgGpcJUG55zRlwWZNUmhnNdsyfGcdRJovIdZmn3VAtRPiz1KvsLdHUFZmS4rPUJ0e5DBDIpxsMkGiVFE6y1Zqx7SGV4X8fhcrmgHwcscZIoXIOYTgtm6mQI+8N7D2L9UbMcgvLfgsd2s8f5dMG42eDlZ5/hm9eEx8f3uL+/w8O7N9LLbJ4+6DQONFqsrR0xYcwojOfy/UZTOzIsTAS3eqb6/fJlWmnLa5wutCsSAJwP2Iw7pJSw3QJPj+8xnY/IKa6iMxtoO6nhUtXRls0VpCUoF6wpJYb3WUiBDTfKJivnjGVaSqsEKR7usNvtVqaHcwVp2THGvgcjS2Bb2h7VxSM+lzShs6gXLoCRtQCiLhwBRxN6Ly2QMiVpLqx+VNtMpqVyt4vNFodBKa2w25jJ942NIbhaPbZCBKHr8Gu//uv4V//6LV69+iukRZrHPLu7xXR6wGYz4lHfa5+ZFEbSwHgdBNgcsmQeiOrnZBCF2HFJu7E6+xZum3nEt2nPKnCBHKHzlr902G52GDd9YxaAJ0dYLhfVbgnmS7RMjVbzyPcAcoxpyUIq1BL8EskayswVqmAGliUiLVH78UvkuT/cYrs7YKfm3NpGtry2lDJCkPSUDiOct4S2OLIxRd0JRTWwk7YG0hrK10FLSQqNUwK7APisQroWHhMu08rmYnRdhxzFR0SWjcNaQRTQOKl/Kh28ycn2M0iW0ZAUX+c8druxKIG7u1s8Pci1Ht69A1JE5we0O8C1RFJmlqZ1Sgtv9nsQ
08hcqowKlZvEl+OcC4hehI9VaInhraUVJJfdClqrzQJzQoYVnoofsSy6Y0c/Yhw28I4wjyMulxNisytb65fVlUnaCC5imk8Az0gEJEh/L+9qIartRtY68eIWifk7nU6lM7ZsXzMU8p5MqgYkRBrtEFJUv4ehCeqEvh+gjb4Q0wznJckeXMC8XEoaCyQ5VRiUAOshbYvomqgJTNMF3gektKDvO4TQS2TmCIyo2r1mF0wDViFQ2jlkYy2ZxARiWYSk4/Xd7/4y/ta/9zX+dI6YzicQebz+6itsNzKGornjChivWhwgXtMVi5/MBsvIrnCZ7D0ICYBblVUamYJASHpGzYl8K/gfzuezSr+asyhFFPO8YBxH7T8RELoOA8ZyIkPRW2qOMSRsNzlQ1vZMTtp1s3Sfdj7BkTAScqMZsvYIiznJNjgxoR8H6dOv3LFe97Q8n8+a+5Q7Ej79OrqRJJWa86baygTl6ekJ/WCZhmaQyAGaMqpmuwY8a9gnA5RFE+UIRFk8/RCwLAkhdCUrUFNCdavFlCKIpRCGc0Z2snMJUVQqum69OG6w291oasnhcNji+PgeS5wUZ6xtutrImvS80AVzXT8LmI8lm4sV38sE1OhORYnUZtUmfeZzm9C2CgcAwuVyKkJiu3ewgrCX8xNCCNiMW0321obBVtJmKQgzlzlHhKBRGTn0XYclLmB28oQ5IUVp71RbK4k5y2kRDMbMoPO4u3sm2tRL0CD05Q7jKBiTI/H5pDq8ppfKas2VnSr37AA4bLcb3QjMwZO1UReN4IJHipX9Kg17SYthnDQCdur4luu5Mm6iyefC9QdM8xmo3RWBEDRC+4q43Ai39PgyQen7EZ++/BT7ww0up/f43vd+CT/76b9DzFGCJkB3PSl5CVCuPXizhoptUx0TNmHJAJ4ZFBnW1I2Za+ag+M4G8daggCFjQFeCVjTZdJFwGMyIYEzTRVUlCY3ZB1ymC4Z+wNAPyHGpoB8q+9Q5QuYoiPsyqXZKuqlolDIyJ2Bp3f4vrHwHyblFabgXZQIPhxvBxqhqJfH9rKSNwDlKV0KysjHbsCo3k1lbLxAcTqcTnJMMQxdqMDEMI5Y5fuDflIwFZzj4VRTNbLWXA0CKfWXVjU3Ng/lstiBli8QFxNrWqsyfOOIlgmXSApUBn3/+Ob7+6heYpsmc6hWyWmNKFRZeN0zJ1xFA0UcqHA2A3YpU8cXKe/VKlqUs52o0KTMjoKHvTMsMkDT2JSJk55F8B7csSDEK1147DdqE+SCDQ0RIOaLuvcMAazoIDKG3aC9VzrB9jlrzlWwFUhvFeTBVUzyOoxIOnSbXqURRBhhaAGKC1S6IlBKGYcASVSPr1i5934MBTJe5wgBY06OJSDtBWsoFxZckqpXodr5rLV+yI7421pPKJQdnW/MAyEmi8gKLaIbh/v4eLz/7HON2gy+++EK1kBTSiIBQ8ZFYgdiVIJFkMFrnqY3ufaPlWp+toP0F/jBdpoAyCIVCbMJKNZ0XrF3jOk2UAfWXwOYMJiSnBa+oobrPhj1B+z20N56QGIBGZlT4VA6cE1KMq8Z0tiJtqzzvBa/b7gTlt0Ajxhned5jnBd4DwQuTwfj45iuaYLW+lHMO0zTBaMuuk023GNA2CKl0+yayaLVJKZVd3arPY5kMu7++HyVIcITgrCyP1Xdbm3Q5vrYKsOS2nbdqbad7mTu8/PRz/PUXfwXZfxPwoattvWxhwKJBU0HVi4JmAdqmdq3T3qbMiOp8Fm1HpJ0b6/ull4UcuDLHjo1guCryFCKF7Hu4CEYWI+I8IcZFdvTVIlvZz3vGskwlNWOrt7RmN1dRL+ydVM8wJ3BOdWAUf6FmNFLOZTNV4Yd5jdgUpNXJysmKV7JCEjXSbYMT+1sAVOHhCyvX43w+q28hzq7swlE3tW81r3O+0H8cuaK9xcxFgTz02UsnIWe9Z2XhWbAl8IJsvmqdh6wARVI2spUiEWPoevzNv/mrWhUvfLXtdlu0kKhC8UWrECheRkItN+JDGWUd9zL/aMxkI5jt70LlVpmSyAFtzq21UAFYh7Orahz7jIHEtbe9jk5ZIUSGYuPDF6lWKCZGhJhJd0lrrm9gn++8Mm1lRY7jiGmaCsV6nmdsxhFSl7ku/Rfz3WJ4rgCzovqrX1QAS5CaWRMmE3ihhJv/VD8XDpz3TX8ymIYj5MRYENEFhxCqqS2usy1kzpLFSYuUAhY2Ra3ot/MzCz54e3uHn/3sZ5paE790v9+VDS5Mi/EHk9GUB5LBDlV+rlt12icWzJDha/SxL3Ld1BVrrQjUli4rAWnD0HIbTd9Rwa4YBvTajZOepzqOTgXw+rKyuZP0YZBaz5oPVCDUubKxvWlH2TQiYplnxLgICZBqBsLMkF2t9uyqC+dymYqGk4BAMhzLEgtVxzAxhiHvGUQBts9QSwywJV7frxkA0t6vy7IUYoBouooxRo6A43KsjX8brLS+3H6/xzgOZdyd1gM4cmVj+laL2PEFoVcznHKSvRmIyp7nzQElfSTAq/2vasdW2DJaIVVvrRGoUH+vN9hqtnYFrvCR9jjdHuWDtUOMUoxJ9QZUpwMcdfUFeKdVQMSIKaJ30s8CJI3vjC3y9PSE0HW4TBfxg6KE/hbh2f6b4FSyCpamGYYetTW6V0G3AhJLC2n1FEMCHA8Q+cbngQYlQSEd4bI5J71zQ9chOGHIsmZGxJci2bRCAUzOQrHOLEl7uyfRklyAVUb6cI2C4UOPYdzifDri7ds3YqLVFNpIswUA1jpC0cCiwTVsNJuRqSl709RAmbYipfK/GmHSSsEQc/G9s45VCQmupb88ThMMyCQ2UYaqfettbyAemn9FXqtTaslW+ztn8e88EYLiUM5LEYZttpVSwu3tLYgI0zRjf9hhibOW1AU4bz33A4LvwMoBM9qQ0Wn6fijPYzBMzhkGSK+eialoHKsSFxehrcZicNO23LQYKYU9eK/PImVxhh1KZlqO8Yr7eW9FJ5bH1HSV+soxRszzXAKOm9sbJSEwvvzyy6JhWE2hmEDSzEMbEaI2DgbUdzNZUStFVMilZn6LBmsUmZlZhiEObKZMtKFG/C7rAF4zKtqX/E2NILY6i8pF7fiPvVq/ixqhK5MFaejiQOAkNYt1o3WUpnc5Zwz9AO877LbSkGXcjGVLaNso3mg6tRVmjU4tV2rpsWVZZAvp5j7b7+bGoWXO2r7AgE47RnKAJedLrAUedcH1/QgBbT1CGOBDj9AN8G6Ac31x+E2zWhBQIIXMmKdJun6HDjkzXr58qQFQNYVyjkYiIIwK23ALzT21eU6DP3LOqz1N2zm0+4AplSZQK+aVa9gDwPZWqvSTjwmY3YDZ2mttJSGr4GD1b15/B6IdvJMw3ZxZkdkM5z26LiAuCeBYose4LOBhwNdff43vfOc7uFwumKYFn3/2HczzInABSbWTRZWAcOL6YUTX98gsTNh1Ul8E6ebmBk9PT8XnucbWzNSymmIp/tViYSfEQu8ZzndgOPTdAIKki5ZlAYPRdwO6vlPKD+D8IEIprSEBCCXboZJHq+kkkAv
I2fzRRavfF8xzxHYrMAsvlRlry5hLYEZF6yBX/+t6rhlqUhlIxCpA60o0ZKUo6fMXlWOajVHwOtJgxNnFWom29z5mPlWK1p+35uNbvm8ApjmrAAozALBGI4QQdCsZPd/lcsHxeCytN6Py+bdb2YnksD9ontR46TZgKLv5Bj/A6h8rE0K+dT6fcblcStPkNulv38uNybS0VHUDCEwOcE7SRfBq2gSLm6YTYlxgEAFRRgiA8wxh587IeQawwBHKtj81RcaImRGz5FoFevG4e3aPzWaLV69fiSA5J1CGZkE8ETo46XZpvhih1DWguEGNL55Z9kVqp0w/K64S6nsy31xKIk2bAqrdliTM2/WKzR8VLBKdX4TRHNqVaeS2dOwqslGPwGr7Sq2nqtxVcUqnvokeG0LAsJFGKl999RUOBynoPZ3OIBIhZGYE1YTXjX+NuWpcMbvneV5wPl904lLx4Voz+m2mvy1IFsDVwyGDc0TOETFOmJcLpumsEfGCZZlwmZ4AygJELwvm5VLglJxZkt3xDCCVe0pJUnPESavUGX3fYb/b4z/8D34DwQekmFamrQRrGoFmQV81KOK6+0nzKjw31VjJsE6qi83wCfPzbH4dCJ6k3b4pjQRGdkB2qF19PjagK5PXaqnV/Zkl9gDSt+AoVfJlmzqpj7SIiiBgZj8MipZ7Kdl3hBhnZO1Wk3PG4XDA09MJIUx4/vwZHh4eMI4jzqcn2CaqhqdlizhLrCH3Zo30cmYpJ9Oqq8vlolVGlQhgfmAbKFgNp3dOMU81USSCEuMCzhGcGJwSljihyx7e9Zh0DwE4Qt+NcBCWcEoLUpoETOYMRwuksYzc/+V0whJnnM5HpBRxPs74V3/0f4GgjNuYChaac4YDVZIkObBmKqwdVpnXq1cp/iASv4tR5tRaK3xwVNacrnNgMmGtghjE91nn1q6FTrAnjVhY9/5hi2YFjuAPL90EDEDtNQrZPY09NpsD7m5vMQwjLsuM9w/vkXLGJ59+hvfvHwAi9H0nHDTncHNzg2laFONKmCahN1cHPup4qFb1knjuhx5x0d4RKjjTtKy0twi8kAitiupau9v3AGUSN+bB0lYxJ8QpYro8wrsgKaJiKgnDsFUWBiDaXarGyRPAVKPjnHFJp6Lp3j+8xTevv8ai5IObm3t0nUecGJ4cnr98hq9ev8ayLNhtd3h+/wyAaPplMQ2tOx53EkzkmAqV6VoTdpYlIZS9Tm3Sc/21QGit9QDpM0MgoVISF2Ozx3gjYEbfgTU2YzXWBrKyheQf98eMi28T8+zZc3Rdj/P5iOPxiL/+6hcAOfgu4Ob2Bp9++jmCk16vRB1ev36N7ei1P6yQ9V6+fIl377/GwwPwySef4NWrV3Ba/SwTJyYzJcG1s5rCzThgmuQ+eq1If/fuXUHXY4x4enrCdjuu2nDKudYN8YBaLOxcQAiidUGyQx05h8vlgv3trQQ8FErjY/NJC5uV9Z4hzV66Qbt/L4SHhwe8ffsW59OxuBaAw+3tvfzOwOV8wU+Pv8Av//IvIeeML//6S3zxxRfFr5NOiFLyRyQt63W160a4lYdmwhKjtEEw2k8bQUIF6xoybTFUKYlRzX93s/1dGUiDN9aiYpGlhdak1dKwCEYPJt8KFBXMZ1kW3N7e4nA4YJ5nvH//Fo+P7zBNZ91BLcMHiWKmacKbr1/jm69f49nz5/js8+/i/vlL3N/f4XyZsN/LHuPLMgutRid6t9thmSccj6eGBSLVT1JjKdHaze0t+i6g70eM44C+78uOJrvdBtvtviD3IdQ+ILZji5lJg0uqj2d+3oQUF5xPZ5yfztjfHjBspPs2uaBwR4UOCkN4BQPVegPnPDabLW72NzI233xTtE6MC96+eY3Hp0f8+g9/CALw87/6KeKylM5MNn/y7xp4MqAUgO6TtabSk0aJDhD6NlWqVdF4RQquXKtWLojg7253vyurqva7al/i9KOGpGyRZ/slwDY8ELUrJ3/+/Dm89zgej0KlVoapIerEHrIbvPllGSDpqvz+3Xv84ou/xs3NLW5v7/D97/8Knp6OePPmG+z3e9ze3uLNmzdwzuFwOODx8QHHp0cAFfdaloiYhNj44sULbDeb0kzFKqeISO/N0HFB3NuyN6kNrUGFJdmtuFiizSymaZ5wOT6hGwcMmw3GYaM9MkYAwDBUvr6ZqJp0D/ChQz+MpY0Xs/bQyBnbzRan0xmXyxkpLXj7zddw5PGnf/ZnWOYJgQhxiSpQNTXYQhBsbo9GhYCg/+VedJ6dr5uieucKRFGAW2ZYjUURgkbI2pe/u93/rqyedfXRStAg2hbFwa0SLF8QA+207nAcBTh9eHhYtZgUkLLRs5pechCcDETIWbRmhvDT3r17gzdvXmO33+P166/xt//2b+L9+/f4+c9/gc8//xt48+YbTNOE73znu+iHAfM8oQsBUSMuRx7b7R53d8/lvDmV5sAh9LIBa4wYhk1heVg0ahvQy9bT1iutmlvr1gh4xOWCGCdMpzOICIfbW4AY47BD1w0AdI8AqjQh6WYk/pnTnYFJiZQWceeU8PjwWFrc397c4Hh8wu6ww7/+l/8nhk2P4/ERvMwy/w30wSzQhoHEAMpWkqapDI5iZu2zC+3ZYQTPCnkY+i/B6VUPOxNd4prHVIXkb9VctgUO1xgZgVbN8T6QVhU676XlwOVyFgpNA2xW5xk1RVFiYvnXey9bBqaInBk5OaVuM77++mv8+7/2a/jxj3+MEHr89m//J9hstrhcztjvDzgcDpqXlCXX9z36YcQ4bjEMG3RhEH9H6zKdCwidFaQEbLdbPD4+qhkL+uNLFkHoQVLXYH31rQRwnhekZcJymZAzsNlssd3foBtGOHLowohhFKyu7wfllAlOxiQNZUiZIilleCddlRat1np4/yBFzpTx9u07AIzj6YSf//T/w9t33wC5MmRSkw4yANY0EKEgTTVKbP1vM5VqvURIFRVQbr+Bx60gty8u5xKBJUfwdzdiLk3STSjag0v+Ch/aXtJeWnKO1ERj1Vdp4QOwJVBbTZjVxNae+CBfsZqc4X2P169f4wc/+AG+971fxqtXXyHGiB/+8IcAgMfHB9zf3yFn0g2+ZIOGvh9U4HoMg2xlaL0tQif5Sds7oBS0KHXIgoO+78E5SstR9c1MCFOKOD09ImuE67seruuw3W4Rl4i+7zAOW5CTfmyAdOsmLUiOyyxlgNY3TAFfy6kejwJZKNcBr776Ep0nMDu8e/s1Hsy/bQK3MuEqYLaHaPs+NRJR57LpIWfaFiTt7plLXrOVj7U2QwkUHItFYlKczAa43MS1c9b8+WEmwAnD0TVOY6MNeaW1Kqq2GhDVbjlG9OOAlBLmWfaChKZ7xrHHi08+wePDA4L3+M53v4dXr1/hJz/5CQ6HPe7v7/H49ASpiL/Fu3dvEEJfiIkAME1zacFgq3EYBgTl3u92O3z11VcAS/9aoxrlnLDMM4a+R6c9yELwWOYF87xgmU+4nM7YHu4Q1JSmVN2GJUWMWjXf9z2YHChYxEagUCnUrG3hjcNPRHh8fI
R3DsfTAzwxTqcnzAvhZz/9aWlQcz13IiBcdpHhRqCcqrZrZdJOuL0t0Xku1gyt49/ITdFsmcEkgBexlOiHFgv62AXl/aS+mLYSV0VFJmSwOvGmSqi95aK91vvyWOK6njBimYGuG9EFj2WRSGoct9jtdgCkUPabb77B669f4/nze9DtAT/7+c9we7vH+/fvsN/f4pe/9yu4vb3Fw8MDcmZsNmPpvTHPF+x2O0yXC+YpYxxHbMaxMDFevHiBd9+8LR3AnXNwTEDM6Dah0L5TSjg+PuFyesTx/XskIgw5wvktuqEvnLigHblzivBezKdXDI5DJSfID5cKrr7vcTmfkeMCoozz6YTpeML5fBZe2c0O4zjgzdv3FUpgmYGiW4zsQVQ3sWV12htFUQpcCpuiTJGaylqVtI5R168SPGRB+rNQcOEPu/F3vy1H2fpRBAh8QfWzahahuNm1/2VmtNZB2rnqNdrftUdaFt56TAtSygofiAN7OZ/w+PgOD49vcZkuWJYF3/veL4m/0w04Xy44nU64ubnD3d0dmKW/xzhucDgcAAjKv91uxeHX9u2mgb33iIsQI512CUIWIuRm3OrnImTLdMbTu7c4PR2xORzQDSO22x2cC9jttzAAehgGhK5H1w8qfIqGs6XYjDJuboRM6nQ+43R8BIgQ5wVvv3mL4+lJ22wlvP36FY7HR0B3lGHmZlOPOk9mKhmAbwDkNqMBQBWFzXULrTSTVGxpmTU50jUWSv1izqIBPzCX13nHD/OQKMLVKiytM/rosbXesUGOG4HMmsMEaW8G4kJKDJ1TQU04Hh8RZ+my44LHOIxgZvziF7/A559/B8OwVSIi4auvvsTd3R0Oh5uSjzSn3ftQNNf5fC69a5kZXeiwHUe8fnzEsESkGMHBYdwMSDlhmhRwvUw4PT3izTffaKRlNO1Om/QBISgDl4FRgwdyTgiQKrgSUFiqS1nnIByfnvDmm6+Rk9RVztOEuMyYLhfkvofXtltOAc6cU3VFqve9mjNSv0o6kiu151vy1UV+PoKdrgROXR1OXHw9+7ZX31JKGr4FumgF68P3U0F4bYOAlS9/da5vy41a5AJyYAj6zSQ0mGVZMAySSzwenzBNE6Z5klrJrgcUUT6fLvji51/g8fEB2+0e4yjFwO/fP0gvjf0ejhzO5wne99hsdri5vUPX94gpYZ6WgoY7J/uhbzcb6f8xTcrrkwwCmBHnC85Pj3j36jWOTycE3b9J+GKyglOccJkuSJnRDaNwx7yHJ9UmQInWBRLZlGBlWRY8PDzgdHzCNF3w+PCA8/mIfuhwe3OAd8oMQU2d2XxbbziTASMxgrnuyKxJchOKVrNRnRxYE+MSbV5/JrgXYJT3bP4fo5X40Dri9nJF33C1Z8oIbWLOqpWMZt3CHrRuwtICj62WI9LmvoqT2Xcla5UxzxP6occyZ7x//4RxHOBzxvl4KoPQdUm1gPhgz5+9EBD4dMTbt2+x2Wyw2W0xXRjYRAMAACAASURBVC4oG7eGgMPhgMtlQlwV8wZsthvE+YA337zG5fSEzThimaUvRF5mnC8nPLx9g3dv3oDh0fUDuq6XHeugfiY5jJstNts9hn5Qs6T4l42Rc3CsewfA/LIFaVmwGUfE7Rbv378vhAKQUpFYC7tZuV0aHBUyQLP8GY2P1rzaiHL9gQpR6Vq8Ljppj9XZhyhmLgwduzJJT1LdLEK/fG0eqWBYH8aGBOl10VKJr7VVG0xYaC6pCl/OVo5TX8HaJdm1pumC8+UJtzfPMQxeOyGmwngNIWAcc9lmhpnx+PiIYeiVuDjgdD5jS2K6LpezbqMTVNBuC8Mh5yTtEzrBzY4PHZ7ev8fQD+jCVFb9Mk94enjA+XTC7fPP4EIPBimo24sPuNlisztgs9kJ6OoJHLk8t9RpxtU4pZRwuVyQkyT/j8cT5nlG13d4fDzhyy+/xKBa03mUBoHWcrNObitEXAMDVN+zgGfG5bcx/wiyYNmD1haywhnm4ki9wjo1VdpfiEpdS3WVRhR6bRHs2nAfNTAkLW9fMxY+uFtYiVUd2MwMjgui9aJXsxUzMC8iTM4TLpczDocbEHV4eHgsLAnZFifCeunnzFiWGXFZcLhx8JsNuhBwOYtwySCydgZCCQxiipLO0Q6PNGTc3Nzh8eEBp8cnBN/p5BIulxNOxyeAgXG7xW5/UC0mK3oYR+wPtwjdRuGRiJRzSUY776/GqI22xUIsy4J5nqSvyDLjcr6As+R3N5sNOGfM86y+5GW1yLMKV3HdCYahoiqJxkyCm62eGmFqcTRu5KMIIn8ok00QZf8Gs9lESpVlgB2tJF5sNRVzVhx/mOBLSPOtDmR5iWSnXBO4bR7UFpNUMdamcd57pLjg4eE9AJaO0OOI4/EowGgnLZus96xp5fPppJx/MWeyJU3Gu7cPSCljt9tjWWYMvbA+mDM4ZilS6Trs9gfc3z/H5XxGill3skt4eP8el9MJN7fPcHt3i8PNDTYHiSqHccTh9g59LzlKzhGcZ+TM6Met7lQs+F8hHTT7VHnvcdKO313f43x6QpoXbDcbTJctlkUYvMu8aL3DiCd6Wk2+gbBOhaf4XwUjs+9qNOmoNrMoNcIqld+OWKBVba1glU/1mWRrH3PAnROMY/VFO9laWrFahR8PGGr0KL5OXR65nIP4Y6V0DsxRHFmy0jVhORARNhs5/2effYZpWtCFXqnbWvy7LBIxdh3gCDFlDIMrxSVm2t69eyvbTjOw324hIYwFNBk3fY+UIv76Zz8FUsKyJCyXC9K8YLfd4/75C2wON3ChwzBssNvtBMLwQTRJToXLXFoneF/6skI3tyAXVj1WrUp+vlwwT7IRxPH4iGm66BaFsrNeSgmn8wmWQpLuPS2o3sAKjda69pGLWInGWWk8RnVlrrVWUQp2jvK9KnhiLhthMA21NnltUFpvuNwaMYQRK0vm2qmvEi59tYr6BqTB2gcQCZV/QeLLSKX4IhgTEZ6ennBW87fbHXC4uVGz1+Ozzz/HZZImfpvNBptxRPBek9wD5mWBCx6sfo8g6+LU932HtETs9htpn5AZ/dhjv93h8viAaZkxn89gIuwOdxj3B/TjCOc9NpsdtrsbDP0oUSgyEjJ8FzTzoJVHQuQv40zEpfLbWnfJziUJ8zSLWZym4oeacMQ54u7uFm/evpLNOJpqMzQTfQ0rtfNSP7gSngKGXn2X1n53y/m341pGsb3CCphrfK+aZFVHEusGHVbQK8G9L8LxMZ+sOvK1Jx998J32Exsg2SHEOhXKKg+lt6xRffphwG/8xm9KS08VqNNJnGbvHPrdTgHRDqQFuDGKtksp4fh0xOl0xN3NrWi2mOGD7Iw7jjvc3j3H49u3eP/mLZgZh2fPsdlusRm3GIcNbm5vG0eeYWVyFsVaQxZQU7CDjMiyQSxUe/kQMC8LLtMFzjt0Qfb1bNtNzfMs/Lkl4hdffLESMEAS0wlqlRiyHQ7MfFbXZJ2K0oQ2Sb2SkT9bX5FJG01cuUQlnWRRKK+VSxGyazt6XYMpDrIMhpU7WSQi3fnwwWpoz
/eBr9b4eoABkKbUuFRXmyEVOo3DMI5gRmFFbDYbjOOIt2/e4E/+5N/g/v4Znj1/gc1mi77rpJo7yF7eMWc8vnsHZsb9/T3GzYicMtIiG4TFeSnpk5xZ0ki+wxICdre3uHv+AvPxCegCdocbbA57RN2CL3SdBkTaToAIXd9LwxbloVnKhyATb8lmWKUVhHXBWgaYYsS8SG40KQO3FY6n0xHsMvreYZ4SUmwCMX1lw69MWaj/nJsArQhDgSFaa6IChuqft/ewNo3Q7bWpLKamdZRNtNOaOl4JRjV1GkGi+lAfN68fvlYarvy/lUqbIIs8paOOlK9JC9AUxVxst9uyEG5vb/Hll69wOOzw5u03SAkYxy1ubm4xaDt2R4RpmmRPJ23vdLlccHNzh6H3mBSpzxvRymJWR0H6M2PoByycsb3Z4/nLl5gzY9jvEboRz+6fY9xu4IMkxUMIGDf9mvVK1YTIaqojaC3MM0To5un/r+zNenVL0juvX8Qa32lPZ8p0le02nS64cKsNtGU3FgIkREstcYXgAyDRXHKHEHf+BH0D4o4b+gsg07gFTXULWYBayCqpJzurypWZdTJPnmkP77imiODiiYgVa+13n2pWaufZ+33XECviiWd+/k/H9u6B5nDkl998BRjKqhSC82k/19fX9H1P2xzJnKXte0LTL2lvLRwrEPE5IJXUzTDqalMCmmoyaqLfPVpbPBaaziYMKxz5/OT5YYxJ0kQSjpf+HWX2YwPhLBdjSpDOBYMgiMxgfYXdZpJOHJIbb4zlzZu3rJbrWBjS911Miy7LmsvLC19tnVFWC5xzPqt0wPQtzcGQ5YrSux9QAsacKYk3aiRp8LjTLFcbTvUOrGNzsaG+WLParFksF6xWS+lc54H6qqJEZWNhTnizUuuY9pw2KnNGwGSkQUbGYAeWqwXGdBwOe3b7PcuyIluvORwOrNYLTqdj7EkV1k6YgOTyj41l07lnoqifk2LnfpeVdskyj7lk0YCDWNmUujEAcu0jQs66R7V4gM/eVF6pZyTnWeOe+Y5IBzPfNfiJP8/5ZKdrrSl0Rms6yWD1le7WSieSH/7wN3j37h0oR98bfvSvfcHLl5/zwx/+kBcvXuJ8ZdJgpCQs89DnNzfPGAbD/e17Pt6+ZbO5gqTuUynBEBsRtMVvtlytOC2XdMagc2nVkxeFbxNURk5WFAUOqTay1pJriYTkfoPkWkdL0jqpa0QJV+v7jvuHW47HA0VVUZiCTBfs9we2D/e4TLFer+n6XohyGHwlVAx+ECu17BhLTu33YC3OIQjmR6o2zZd7GsOWI4SuQnFySmh59IeoKeeZcqFQnW2T8yKOMqHaPu2fGAb5iMBScpqJZsAnERKhDJTKefn8BR8/fvRVQTl/8Ad/wHffvuN3f/evc319xTBYvvnla06nryTPfjBsLi6FGyV1k8ZblHW9oFosePbspcQuffWOzjPysqQuSwrfzMKUBRlw2m3JMs2qrtlcXXNxc8NqvY5iOGTQWmulX4DOyHUGHh0xJELawUSrDW9sGWPpO9HD+rbjdNhz+/GjiErTczjsWGUllOIGMdbSNEfKouRopKKe1FtqR9AV0aG9mLaPiWO+HoH4JlzwzPrNC1VSuglHSE/P5yc/JgJkgMHiiGatjoO3aoQnEkIZ+w89RWDTz9Jyee3hRxUGiTN++PDOE3fJfn/kH/7DH7NcLrm9+0CWKYqi5NVnn/Pq1WcY4zjsd6wvLlmt1pRlQdM0UR9zzvkM2SVtfcJpJ8Ho45FSZ1S6BONwOijEmtK3AmyXK4p6wc3z5yw3F+RFQV0vYkldSMdGCzcUi3z0fYleliDm+Dl2dkTuaU4NXduSaU1zPCIOZMNAT60KMRqM4XRqafuOXOcYF/qCTudXQeyxeW5t5wSSisG56JzTxzxZ8ZGO598HQKdUe84SVKSWoIJAXMqnrITU6zOyPbVQ0pebvuT0ZUPSnrOW0ne8zXJHlkm/oT/8w7/JZrMi8+Gd+/v7WC52f3/H9c0NVzc3XF5eij/MjRCeFxcXaK05nU589/23HPsTTom1ul6vOTYN2+29RBl8TySVa7QCMxhy72Kxvp4zizUCIziK1prSuy10lpElKkLqQ0o3oRkMh/2et2/fMgw9i8WC5Uq6rzSHI9pHSA67Pc5ZTu0B5STc1tthann62G/0dSV4I/M1CFZlkIVxbUB6TgW/mH2sr8+Nw3RN5+udi06gAjlNBqCUio7CAHHpxJ8xmrUxAHEeyecp1jxR9JNBhoJXrXIUUOUF7WAZBsswtHz55ZcAscgjtk7Mc9qm4fbje+E6bcNiWUvPJSftdLqu4+JiTdu2PHv2bNQZ8pIsK1guxcemihLjIFeKHMXgwDUd9thQ+nTt3BNWURSx74D21qwjVJiPLpgsWZBAbMHBut/v+f77N+x3DxyOO5rTSWARnMNYwewoi4KLiwtUnnF3dyfwDVYiBsZ65Tuuj8ywVWA8xwx95Z1z+BqPGBbM3GOGYHGTZl4qaFVKPTp3Qi9naCCPH7jHOllKjeEn7lil4iDP0dGc9U5dGFMCnLhRvP6nfAMr0anwhF6w2+1YrVYR4Uc62hpevHhBVQoBBFA9MwzoRY32pW1lWVJVBXVd03uId0HgGWICZN/3FLnGdA6nMzCWw3bL9v1bOBzIVmsKB4VP2VFuzAsLPin8wtqAXasVQ99H5Tms3dAPHI9H3r9/x/v37zgeDz4w3rHf79FWesIHp7H058zZb3eiR6bZF951YZy0pkaBco5cJdIoegOShfDjDR878HBRTzOIc6JxLmZT+gl965hzshHekrgzzpbD4fOGmA7qnI434XDxtcZ7OxeUT+dLxRR5UVHa0vvFLri7u4sYrCsPvb5er2iO0mOgLEva04n3797yKvuc5aJmdbEi1BcEcZyXBUM/YJRglEkrRMtiUeOsoR96MqVo7h94+9Of0334SK2A0wnbdth+IC8LcTzm4nS1IAgODt8QQ/kExgFjXKSu0AvUDIa7+3t5p+bEYXfPw919zOI1w8But6UoCxSOsq5ojG++oRSD6Udry1p6z2msCuVuoD1wxcRfpqa/Kq1wxiVR6LG+EiX1Acq5uEmeUn/Cv2lIyTlHHqLykZjCQIPXn2AUT8lCB9tZ7nTWJfG0qPSXJUaRPM9jrwZvsXMonyUb8+U9t3LW0p4alstltOyur68pfBnbfrdjsVhwsbmiH6Cuc99lxaMPaU1V1oQmB8MgsO1d00hP877l+PCR73/xNd/85Cd88dmvoxcVtmno7u4oS+k5pWrjLUopGraDla54ynn4+VZqSK24Q3pjpAMwjqbpuL/9yMPdB+7ubrm7vaNrW9FFfU+jxXKBRkWYBG0MJx+bdV6UKKVQTlE4FcFdrPJoh747n/J6VQRLUS627fZTP6o8fqMov7ZPKjyJrhbpIqnfDTQQnbEST5vKWLGAxM80fuW8pfdJ+nnC4pCAcDooYeJjLFPr0C88yHkxhbvOt4EeBjItNY3VqqI9nsA4qucBhl2ILNPCCZrTnrzM0CzIsoLGDDhr0XlOrivQ
(此处省略了 notebook 输出中内嵌的图像数据:原输出为一幅包含 "Train Image" 与 "Label" 两个子图的示例图片)
g2b53U2BFlf9+RNfuPX/xnW7ozuDg8f7mh0rW34Tz74QWxT0xow51srsO7V0PucotjJQjm8S9ZAgU/qBR8CV3G/1kPbcHf7BnCbIXDOBQ60J6kKZ3XBc3O9hVvQHh/ZIuW6eN41j3fL/qmzIHYsPhrAZFz7krN1OiTxutLdYyLH2jPe4fydQV0zrukaXHf8bVA9brrXmGjZ6c6fnvcW9GmKO1EMfXjvaZqG/+zP/cC6gmWHHR4S3FEyprVif3+Mp07u3SUv1UVvajOC3r3JluTFbVITbVCIzj5e6nH21gdvdZtWNhj1taH7lv0TLlInd95n83o32OSLXqH3bDtsGtZD6qg2PPi00dqAwEeK42Lb7qQu6G3ZdTDpGFtsYJ9G2hLa3HreOyEcS3P1kSsYre+6/cOOr/3ar1377pzjox/96ANqzQ6vBO5odP/BP/iHWNsgyiHo1rtNPKUnvvtx+34Qqh8Io10f/M9Ov9tfF40IFx3STc52k2Ns27OFC03H7VvyTdnTZoR+s13t+bYE4da+c+GSL24XvdS1ZRKmMSTD3dIJ6ZhrpxE2LqbHg3f3Z1uQq/PIe+2Uvs292HF07XyFAl8C4hTiHb/6q7/Me977/lfmuK9RfMd3fMdawLVpGrIs4yMf+cgDbNUOnwvuSC986EMfihxj0BG5NPBPw1BIY+nem9xNUOjzjt4TKYZ17zAhGU6RqLG9xNHse29KKYzS5HmOMRlZFv601mitIWlYe5703Ya/24JI7d/mNu14/y4G10dPNhnERBcIQWfb3tKo5JXOww3n8uGP8MC0qHhcaY3k9o5I0EoF77l/78KXOKsvdJzO+o5K3jhG2iZdx4Vr9S0pfSmF04f3HttY/sqP//d33/ghxjvf+c4Lv1djDO9617v4pm/6pgfUqh0+V9zR0/2df//b3Lj2CM71HKNkENa4wLQsLdTRzvXHxAJiIQbVNofAF4bqauP7Fu8UQItQlEMg8JRN02BdA2LxVYWPw+JtHvLm8dPny9ZvXHK4Ni+XGlwIRqr14iVO8xVBicLHQJxqr63/gvVUAhdYivXOqv/9ggzNd/use9e0Iwvvehx8GDJcPFf6P7bdpZmH8Tip/d2iiyqHdD/SVO9P/N7Ht9+0HXj3u9/N13/91291EvI851u+5VvQWvNrv/ZrD6B19w/vec97yLIgV/37f//vP+DWvDK4o9EtcnMh2LFp/DYRlnURdqXCtFW1ZkQhGJ91A6IuCZSFfbZH4ZUxlMWI8XiPshyglOL4+IjT89t4ExQGztoLx7kX9K/T94xOaH00IMnG+c77Dl99t52sG1MR8M5tTKdNNEfnMAoSpnb0PXUu0ruDwYC6rsP9UAprbTfra+OaLly773ViyYBe0kndTbrnhVZ1cum2vT54snevhZUfPrz1rW+9oy68KAre9KY3vWaN7nvf+17G4zFvetOb2vvwWjG6d6QXhoNB9yW+7dYTvMd1uUJYrSC8VVFKJcloKZR2hFlnGXiFqLhc1lnEPu+Yps2G4TVIG5ZPBkGBMowGY/YODzl45ICr167zlW/9Gr7kS97MaHyA1marsU7nuqwDSUarz+fK5oyuzhb2eNrwrVNBBEvpoooBPNaD9kKR5YhSODrqRiQ8lJbKiPdBiQ8z5lRYonSgGMrhCOs9ymiMztCmm5osKp0RJPIZfY/eObfOEsCFvA6iJNI93b7OuR7h3CHNc7mM6ujIkLDroMgvPJMd4Lu/+7spiuKu2z3yyCN827d9231o0f3F+9//fr7ma76Gt7zlLWsdzwc/+MEH2KpXDndRL4TosiTvB2Kgx/cnedFz1uJnh5Is7hOHoXH6a7BUMZNYpBH6L+mm95eG2WnUu74taKPJCsN0es7JyRFa51y/doPHn3icvcmQ3/3d3+Z8eoKtmzh9uWt3OlbKodCf/HHBSG9zMel5zht0RDiGYl2nG+6g9uE+2sauHcv1vFxiUCu2Jtx372msQytFYfJAp7gVznuMyRgVe1T1FNu4rjnR+/Q+cETO+nVjGeVvfQ+936GGXBRuzZgm+ZtSCrd5Q+g8/k3ttiBhKnZ7O1+h4NxrDE899dQdvdyEPM9bdcOv/uqvfr6bdV/w/ve/n6/6qq/CmIum6Yu/+IsfQIteedz1ySaPKHzpr0npA1NgxbeelFIpKCYgGhGD89HASI2oaL43PNCLkrPLaIye14Xn6OSYW7ducXJyxgsvfJY/+MTTfObTzzMoR7zhDW+iLIYoo1G9RDHpHOl/ay2XYc3D633f5kGv0SBrFGrnGhrCULx2FmfX8ysoQFw/8U8MYtqQ/NJojcnzNsBmdEaRj9jfu8r1Rx+jKIZhnQocsojgvOsMuay3P50oTfToL+urIZTRnTGOs+K2GdzNe9uHk06VtjO3Lw9f/uVffkE+BoFmeOSRRx5Ai155vO997+OrvuqrWg53EyLCn/2zf/Y+t+qVx+eQxFza4FL6S15VyHHbGyKveYjr7uJW3rR/lq30a0wRqQ1FMWQ2m7NarVitVtTLFefnJ/z+Jz7BJz/5SfJswBOPP4VWOUqZmHN33eDn0Yhtnn+bUb1XPjhe3Gaz23kRNo4ctNZ4gbIsyfIM631UL3T3pw3kOR/aGikGYwoODq/xxjd9OW94w5u5cf1RysEEpXpeQhzze/GgaEcWrUHtdWCtYqK3bE0tYvSFe9fOHIw/gs17uBb8ZP15vqx7+RDj9a9/Pb/zO7/DRz/6Ud761rdeWP8lX/IlfPu3f/sDaNkrh3e/+9287W1vu9TgJly9evU+tejzh7slMe9ejLVRaTKqfm1Fisq3XlMysNIAOtqgBhGzJr9aMyxhYWuwfMtDrk8sSOahqmqsdRidIwhlaajrGmctt269QJYZnnjidUxnM27d+iy19+AanOvOuaxWTEZj5vP51kBbS3akdnmi4qC9HT3uN7UfPBZBp50Bj1fhGrUTnBKyLMNVFXVdY4yhUTHDmgNFyJqmdPBaszwPp7fCcDLhicdfx97+PuPxhCIfUFUVw9EeJ+fHUC/ARadUabwEmgBUq1aQSAGo3qjDRRVKmnVnjMFaG+gC182s69+dRD9tM6FrnVkaGfWe+Q53x6c+9SkgvI/PPvvshfXGGIbD4f1u1iuGd73rXXzd131dS2e+1nFXT3eblGpz6JyqQ3gP1rmoPU2eUDxO6xOpteP2h+x3jpB3uQR8rwNwzgUPt67DUBwVtLpKs1qtODq6zXQ654nHn+Sxx57g8MoVRGu0CakOjTEYpVksFpdee99jTdNng/EJf1spBkCJZv1yAp8LwdPF+dbz9t6zXC5bgxZoGtXKuELib4t1DVkmPHrjUQbFgNOzU87Oz7HeMR6PuXJ4hSzLI5UC4MI9jjkv+gazT1P7GOwLwbXWz8Y1FoVsLI9VOnrXu+0Zbhs59D/vPN2Xj+PjY974xjdeWP6VX/mV/IW/8Bf45m/+5gfQqj88vvVbv5Vv/MZvfGgMLtwjp7v5PXk73TpHqBjRy30gEpf5NaN1z8P1u7yQ6dx1XSE93Za1Nc45rG2C191YptNzimLAZHLA
aHTAo4++jkSP9JOQDwYDsiy70NH0O5g7e2f+wqdgVMNkDw1oF+RiCRpBrMOtagyCisqGdoaaCGU5wDlP0zjybMC1q4/iGs/R0RH1YkU1X7A4m7JYLDg4OGAwGLXBPS+qVSX4nkGHpEKQGAD1bXtB1jzb/l/rEW+REm5iG3++M7SfOz7+8Y/zFV/xFWvLlFLkec473/lO3v72tz+glr08fMM3fAPveMc77hg0vHbtWnBGIrTW/Pk//+fvR/M+b7ij0b23F0Tav3Z6b9RSKR2NmqgYPOuYXs+6nCgtAy4Y6u48XbuCjEkjTvC2YblcUjerjgJAyPMMbxuaqubk5ITxeI/Geuq65uDgShh2+563KbQBtbSs37C0Db12B9agH6DqAlgp2JjoAgf4aAyTwfc4RCtMnnUz8SJBKiIURQlAVVXk2YCD/SuAYrlcIpE6mE0XTKdTTk9OUEpR5GV3v3z/uSTj5xEVnk+n84oZ4SSsc3iMMReChnfqdFJnLEigLLYSDju8ElitVluXiwjvec97tnK/X0j46q/+at797ndfamPe8IY3oJTipZdeurDu1e4Vf07VgNfkW6QMW325FyCu8467FFxBFRF5Q7vmNfVVqtuRPGoRoa5Dnt/xsCTXZq3kTrVqqK3jpZdeYjGbU68qnnjsMZx1VFWDNqHkkDE5iGZ6Pse6IJPa5pH3edt2Gm/0YnH9BD++/S/Er6TlTVHB+wRwddPqc20giqMhDRUX8ixHibBaVRTFkP39A3KT4SsbZGN5AUqDVsyWC07PppyfnzMZjzGiWm4WAaVV/FtXJ/SNqtY60CcCosBiQYNX8UJgfaYgtNnP+uqHoJhIgsGLVMJl9MMO946nn356q5oh4bu+67t485vffB9b9PJwJ4furW99K08//fRr9vdxx0DaNoSJrGzYRRd92GAww8vXvVidsVr3VvvrEw/q6UrvtIG4LUZY6F50Ywy1DRH6YZkzHJYMBiOWyxWrxYLVasXt27fRRnPz0Ud54onX8dxznwrJfAiKgDzPmS9mjMd7TKdn+C3FH9sh+JrXHVsjXRKgtL7PgUpYEKb+xnwLxpi219ZaUxQFy9kcH9uktGK1WlFkBcOiBA+LxYJhMWQ4HFIMBtQ2nOP8/BytNSenDZO9g/W2927hndQiIfl790ySt9+WQ+p5/Gl/R9BaK9a57nR2zcXsZO2z31ENnxOcc2EElG+fZPIn/sSf4Gd/9md55pln7m/D7oI3v/nNfOADH7iwvK7rtdmUCcvlkrIs71fzPu+4Z6ObDKEVwPeLeyfZEVGETxj2xyq9gUZYP9Zm7oD4gTTsbQVnvtu+X1o9bJEqEQPi0VlGluUMRmNEC06ELDN4mwdPTylu3z5GG8ONGzeom4bnnvsUTVXR1J7BaMheNuD8/KylNtYMV/yslEKLWu9MeiEpiZ2Ai4G//nRc50NAS5S02rG+AfPOtZIZpTXL5ZIsKxgNJ3jnaZYVe3t77I33KMqC0XhMXg5YrSq01kynU24fHbO3f4iXDFE1OId1DTqpKDYke/3rDF5xSJITDGrgoVvOLVIp7TXFy0/pKPverpKkbfFtJ5R0vYnS0a9RT+ZzQZIv3gs+9rGP8a53vYtf+qVfYjzePqX61TKLaz6f893f/d38yq/8yoV1h4eHNE3zqqcVEu5odPsvY8qfEMxd35pKV+q7tUFJ+nXxeNsM7prhbQ0eLQ0R2NBezodoA6wH6z3GKJQSxqMhog06y8AL1XIRDYSjrmu01pydnrK3t8fVq9c4OT0l04qz82POz09QYsjzjLIYMp2exWsH70OiHqMNmcnQOkPHGTNN0wSv3Id0hYIjL4chgOUb6moVPEXnaZpodOLkiK50fRjar6qqnR1XNTUmz8izInZinjzLGZQDlBbG4wnD0YhlVUfjbEAszjfgmyA/swrnLFpp2hhn75m0QURnwfuolOh4eetsoE+SzQ2MRbC9Lnj0XgI9lOiTwFP7UBU6nqfLmEY7TKqdbbOf7dDhB3/wB5lMJve8/a//+q/zPd/zPfytv/W3GI1G7O3tfR5b9/nB2dkZP/ADP7DV4L4WcU863c740jqgSUCfNJ/9aD1ig/8rNm25VY1w0QDLmjA/LIdkKbqaYC3JAeIQ0RRZzmpVMy5K6qpCTMZ4b8J8fo7UHu+knep7+/YRjz/5JDdu3ORTz3yCyeQQvGE4HOB8GEo3jcVhqZZVlH8Jg3JEkQ8YjMcU5QjnLPPFLAynfQjQaZMxmeyT5Tm4htOTY6qqolrOEWXwTc3KNeE+Rf2tMYbJZMLZ2VlrgJumIc9KclPgmsTh5m2UGqBa1dTRA5jOzgJH7Dyz+Sxw6S5U6SCOOsJoZX0GXFI5hOcaMp+p3v2WnrKh/S1oAeUjoetbDje4tkERoZJs8DL5GKD8zui+EvjlX/5lHnvsMb7v+76PD3/4wxweHj7oJr0s/PAP/zA///M//6Cbcd9wV3phnQME8K2H4r0PEXdPKw9LovrwsiragjXeX5CGXPB8e+fsG/pWRJBkS3HrtG+WZTRNw/7+hKZpGAwGeOeorcOjAUtZ5CijyYswweD49ik3bj5GtVzyqU8+w3g05vx8imjFYDBguLeH8jC1p7gMstywv3fIZP+QycEhh4dXMCanrmu8d3z608+S5zmPPHKdshwwKEvOTm5zunfMYjHj+PSY6ekJdbMkb8IEklAWPQyZqqpq8wGfn5+TmUCXOOsQL5g8J8uyVlGglDAoB2TeM51NEXQwlt6xWiyjF90vea/wPmio+4Es530I7HkfPHWRlqGHmA9iy7NKKoj05FQMKHpCsM0733ryrRa7p93ORF8wxDt8bvjpn/5pBoMBP/ZjP8a1a9cedHPuCZ/97Gc5Ojp60M24r7hneiEgVNldl3bZNQlVXBGG3G0wLXCUm9jUbrpoYbdJKtJLn0T5zjnEd7ypMYaqqiiHgzhRQjBKkeUZy6ZitlqiaxWm3CqFtQ1NY3n0scdZLmcobRjujSmKEIRTRuFWFS+Ipq4bRqMRBweHjEZ7XLv5OIdXHmE8nlCWJYLixo0nKMqCvb39oBOua8qiYDze4+TkNuVoyPOuoao0y9USRQgcIMJwOKauK7IsZzabBuOq85BYvPEMyoKiKDBat4bXZBmiFbnW7Ok96rpmcX4K1tNUdTBvzgYJGBINbpfPN95VEg9NUmKQ+OewhYJ25lrf8K6NgFzqEONz7dVVcxdSWHbPfmd0X3n81E/9FMYYfvRHf5QbN2486ObcEc899xw/8iM/ws/93M896KbcV9yVXui8U48oh6JLGhP+D5ypYNphf/BoQgrGpHfoi++3BtLoybA6qx7Xb7yc0s1vSwoA59yaZzUYDMi0wWMpyzBFdj49p66rYChFsVwuGE8mPP66pzh66ZirV6+xWq04ODggy3OqxYp8MOL8/ITBYMhwOKbIB4zHEybjPQ6vXKEoSwbDMTcefRznLM47quWK+WyKaE0+mJKVA/ZWB9Sriun0jGFTo3Ccz87IswGZyZjPZpF/NWidoSTDNZ5MK7RWGKPIUu4DFatrGA2
isXbFaFAwLUpsczt6wqF6hm3qnuF0Pa833EpFd8OTnrflY9c4dokZyqK/muQY8Zk6l0YotFx8okrWotFK2sTuu4kSnx/85E/+JMYYfuiHfoibN28+6OZcwGc+8xl+67d+i1/4hV/gZ37mZx50c+47XoZkrBO7r3uovXBJEt73g2obHO02g7uNeticHNF5UiGQp0SD7zwv53yI/HuP1jrkMlBdghZjDKPRKMqfAo/q8WRZwcDvceVQU61qrhw+QlEUOCXkhXB1MOTmzUdZVSu8BaUC9zscDhnv7TMYjiiKYQg8+Rrb1FT5iqIsGa8WHIkizwyLRclTT3lOp2fU1QrlPUe3XyTLcpbLJUYbTk6OKYohxmRUdUNmFIOyJM9jKaIipxwOKIshg8GQcjjEe2E5mzE9n7Yefr2q0Mq0ErBOxqdocx1D6OFiXuO+56lInq6PmTh9b/+k2ZX2z7VaMryut5oAACAASURBVNpCoGmSSj8/b38245audIdXED/xEz8BXCxs+YWAj3zkI/yNv/E3HnQzHhhepk53u2fivceLxBy7QnxtQxCH7kXdpntdeyGh86ToAj89UcTasNZFQ+xjhC8N10UJRVHQNDXg2ymt2hjyvMB5oalDkG+1WkSFwJI8N1RVEwzr/gRvLM5CUeYolZM6kGpVoZSmLIeMRhOMCTkWrFM0OvCteV7Q5BnT6RRjNIIwzXKuHF4FPN5ahsMh1jYsFktuvfhCrHyhaRqLEk2uNEVZkJlQB64oyvA3GCAmQ0STZYbxeMT09ATfNJgYhNMmZFSzPiZ9T+ZXFCLpvpGiohe4V9UTqLSSth4XnCqBpDJNfQ1v2/ludtK04oWWK97h84dkeHf4wsK9B9JwrR5zEyKC4PBeBx5XHD5OVRLpEmpvcoL9/dvj9I6fZlMhkJJ4h+VhWcgvm5JvWxrb4CrPaDSKCgBLlukYaAvGR5mQDMeYjExnrBYLqtWS+XzOYDDEORey9ltH0zQopRE0V6480moonbOIBE2lyTRGhykAdeMQspgUR2i0YjgcsVwuyJucoixwTYNtLDrPycwBq9WKwWBEtVqRmQznLdPzaTtLz7sQuNRKYWLBTZ2FzkNiACsrSrKybNUGdV3T2CZ0aj5JthItEDkcggQwOrSBCFI65IWII5Ww3rWdWheUSxMjaJdvBkEVcqFTDdx/6mB3fu4ODyfuKZAGIHgkJBtc53R9KgnTnz8WvwkgNr7EqvWk2nIyGy/x1jnJF3i/NHstFq+0DkQFraj3DMuS4XCIUopyNMb7UC1C62Ac6sbiddDXLmZzGteQFxlVXQHCaDSiqivqxbxNaWh0xnhyncFgQFEUIb9BUcTcEik5jG0H3FppdKbRWnFweMh8asKMt/mM1WLO6eyYsiwx0Rs9OztjPNmjyApOz08wWuOqGpMZjDFkWRGMbCuYDXRJnpd458lMhifwp4hHK4Otatoe0vfyJnQuaTCCPS+V9vl1PG9nONcNb/s5mk/Ve5bpdxEfVwisqXVvecfn7vCw4mVxurDOw6aZYsHI0pndfi0uOqlZwmYE/LIodj+Ithl4kRjISVI2EaEsy9bLTcdtGguEwFFVrfCuofEe11TMZwJKMRgOKQdDRsMRTdMwHI/whOmHx8fHTMYHoA3DyR5FnDJclCVaG3Ss82bxWGdxLuQ50FpjlMZP9tBKka+WzOdXOFeK2fk5i9WKq6PDmJayoSxKlIdimbNUGm8MOuZLMHmOzgpUnqEyg1YKrRSZyVBKsVx6hsNh4LRFyDKNm63fr00uPag+Ek3QGchNvlXFAJzEUU7H7QbiyCad9qYNjTyCdS7mn4gVhKUfHdhhh4cPd1YvREpBttjEFLxuQy8ioDoez8UXNfCCsmas4RKqQYIg7YIN3mIwiLOeUmcwHI7IssChZlmG1prcZFTVKlRn8B7b1HjXxKrDYX+T5eiqYjAc4X1I7zifzbl9dMRiteD4+ITr1495vXwxo+GQohyEKg+ZaWVwoeNJ7fQoUW3y8LwsEBV44eT8f+r3/wAxqq3ge3BwwHw+wzqLUYoyL9AimMy0eSGKsiAvSwbDEYNyFBQOSpGXoYBhXhSoOKzP8gKthabqUTQ9zrXzWtdvdOo4+x1cMLZhhJN4+bRNCJpd5OnTwdrAnI7J0tP0aXV5R7vDDq913IVeYM3gXdBqQnRs4/x63/N45GKik01czF+ggH6tsuBFJ8MN0JZr855QKVi1OWtnsxl1XbeTDIqiwChN08REGrYJNEDsDAInG4bwq+UK29hQfWE4ZLVacHZ2zmq15IUXPstqtcRZG4f7Iedua7y9i7QL0TClGxMmKBhjyLRGRHF+eoZoQ7Wc8eKLS/b39wHajiHLMrI8awNY4XzhnGHSxSgmNBfqpkGsoRwUFHmBaM2gKINMy1sk5qfwWLxPaTahJXI3YK3rpgYnHt15lNJY17S/ibaCRPsjuJwuUPEYfYWK9z5UFd0Z3h0eQtyVXmjlPxEXJV8+Rr+3vcaXv1Trnk7wsbzfLA65Tk20Q1w6TaiPErH5fN4K8YuioCzL4N1aGwtlhr8m8rTO+ZBDIebUFQlD6boOuQy89xilmE1POT8dslwuaJoqtsV1bY6ebbCxXSIY7z3WN0BonwJwnizPGY6GjIYZR0dHbZIYrRTlYMC0qdBGx04i3A9RoTiloFBKk+V5lMxBtViiyoKiHATpl3OYmOKSjXvXjjaEdkTRVyS0HHuSF0TJmLW2ra3Wzjq8RGd7YQST6OEUQEvPfSOT1A47PCy4s9FNcZgNQ5tyLjjvQlYwiRHy3pA0BVs2OdlN1UK7LCoghMAhxi26pvQ8q35+hm6mmm+9UGMMTdPgGhsNb6gmkcUkNa1HmWVM9vYwxnB6ek5dN4gIeZ4zGIw4Pj4OXq3yHB/fxjlHY2usy0KCGRVNsPftLKxk6DUgOqgYtNKI72bT5XmOrS17e3thW63xIpRFSdNUVFXVG00I3oVJIEWWtWqNLMsQQtrF2WyOT9pjCZ2OFoWVwGenuNZaNeOoONg6zPdhllq7JvWoQf6AUtIme+97sJuB0bTMpfzH7VMN599RDDs8jLgLp9txpl66BCWJJ0yaz5C0ryV46TxBANnqEXUnifFv72IUPB0kBOkuvphRH+HX0z2m3K/GGKrliizLo/cqiGic89imamerFUWOcw3n56dM9iYYozg6eok8z2mahvF4zGw2Z75YMalCPoemscGYe4+1gdqwtsHkeVv40dtwHTWQiwTDqxTiPVqrqMVVVNaGjGjasJxNEYKiIjM52oTlokJuC6UUTVNTVzWiF1gci8U8SMhMxmIxI8syhuMh6kgzGAxwvbykIgpliPkQImee+FaVZqF1nqfDh4TnPnzulA/d/e8b2vSbSLx2/4mtTwPuio/6nae7w0OKe1IvbOow07LEY4oXUtbYTk3gEdGsUQMb+yfqQqSb698/50V5kZCMcTD4wejhgu4UhNlsxrAcMJ2e41wwloiDxqJ18IT7xSDruuLWCy9S5Dmj0QDvhdVqFbndEdPpOWI0y+WK5XLJuLF464Km1Y
ZAo7M2Jl+2eO+wFuJMkThUdygx6AyGwwFXr13jk8+cc3hln9OTc6ytuH10grWWyWRCkZWooWe5WGJt4JmNMczmM2xQzqKUpnKOl46OuLJ/wOnpKYE0COcONi0mIbKentqsNwDZ7m22/LmkjLrh2cZ5EGvPtE819HzzNU82bRNonZ2x3eHhxp2NrgjWe7QHL4K+ROIVPNRuyNilXoymeC1yflH+1Z7uDk1JnlIqnJhoiVDnzFLXlrqeI8BqMQ9BMpPhveBtSBBusiArS9xtqgZc1w3Hs1k7+aGqLaPxmKLI8ALz6awd9jdNQ71agfeYLKPI80gpNDiXcusKeEvjLdqB0QLicB4GoxFXr17n5PSE1XKJ1prj4+OgpDAZjlDbzdVV7CCkzbGrdZro0XB2dpsiLxiVOS985tNAqCqRZVm8dkXTxKfTC3aFjG8u8r/rfO9Fvl51kyV6zy15r9ueaddRbv6U1o+/LQnODjs8DLizeoF+8MpfCHq3Ufr2Re4nw4FgdHs1Ji4RxrceUfjSvrCb00+1VpFLDIZWKUfTOHTmwDc0ddOmTAwpHU2gIBQQPVitNYeHhygVSuG8dOsWRhtms1nYN9MMJxNOTm+3eQO01jhrWS5n1PWKxSIl+vZhCjDgvcV5C15QOtyPxtZowElSEQRdcDkcMh6PeeH8jOVsxt7eHp9+7jNMJhPq5ZKsCDIwkxnqqibLiAbfsaprTJGD9UxPzxgMCsqiZLlYhDSTAlVVt5yrSKxUwaZBvVxxkAzrts5xLeC24em2x/R+nT/uQUuYmtzPybDDDnfC2dnZa6ZqBNzF6Ha1wDyqo3dbiO9iLH2eTqluwgLQ5t9NBR/bPzpT0Kcw0tLNFzPwqF2TRTkQCy4EwJQH5xXeCXjBGME1FaI1ZVmGAFZjuf3SUZjqSyjhbm2NKE9VraCB+XwRZpxlBu8anK05Pz/lfHrK3jQkKK9sjTEanMXiqZqapmkQZRCBzCvEgvUNWmlAxwThgdsdDkJ5lcbWVIslN649QlVVaC0sqwXlcNAGoJomXJ/JMpytcZWnKHKMgunZKWVZslwt0EpoqvpCpdg+n9rRP73n2dIG/e3DM1FxJp9zvqs+0dsueb3JUCfHeqtBjZ23IK2Oe4cd7oZUwuq1gruoF7rKAheGnj0ZEKgwpI4SoU72EAyw6w1Rw2FbvqE7XvgU/9zF4aoKM5q0cyGIp0JwTCsTttEh1aHULmYZaxAFRhtKrcMQXQSdZXijWa5CwcrFYt7KzZIUrSyHraqgHA0REY6Pj3n8ySeoVitm52c4P0DjMVkIlNWNpbGWDIXyghePowGfhfSKSsfAYND6WldTrxYoFxQW8/k8Js4JuYG9c2HmmTaIDvdotVyGoFpVc3x0G2stTV2xWCzaahODwYDz6fmapwusGbmUXzc9o34HmZCG/955HF1Zof7zC/RON5JJ/HzXk/Z/Sr5VS4hWmJgFbYcdHja8rMoR/WXhBbLRburWOAdVQ6IEoqerYm6EO/CAbWCu85tbY5CCNG0Z8+BKQfQag/GXYKDwbdIakynyPEMRDMhisaBaVdRNRVWtWK1WgaOta6pqRWODBG6xWDAaTdBaMznYb6mHs5NTRoNhCMwpj0GHmWAmSMJwLsi0fEj2rkQQLM556nqFiNA0DcvljJPbR1SLJdOzM6pVBYCtm0hFWPSgJC8KRAnNKnjRTd1gmwZlQl6Hpg4cc1mW2BjMGwwHnLz4/Np9FaGbDSYpwXwIRCZdrnc+lGfvnjJJapb27f8eLvuddMHOzvauyf020kjusMPLRROCFa9a3H1yxGVDQAG86uk3O2fW+w0DLXLBaN/1+Fw0zkgo9e1VYphD8Kl23fTULFPtcETixASjM+bTGYvFHPFQNzWrahmM7aqiaSwpkY9tHJVd4X1MflNV7O8fYK1lOp0ym81Cpi+tKUwRyuxIHjhtYhYtolrAhXG8dUFqhoQKv6vlAm9rXrr1AtPzM4zSIUeBCGU5YDQaBXmYWlIOBigfjHXwxj3L+YIsNwyKck3ytVwu2dvf5/z8rONaew+nu9UXZXiiet825GT3iu55sfa7SMfqa4/759lhh3uFtZa/+lf/6oNuxueEe/Z0N6PNQaq7EWRJVYGJ003FtcfYnBTRHbMf2IkHpm8g1r1tC2gxEMuZN01DlgcKoCiLYBCV5sknn0Ssp24aXnjx+eDV1g11HarzLpdL6qYBhMa6mHsXRELwzTnHdDplNNlnNag5PT3l6tUrzGbniHi0JuZEyNFxSqsoE/hbJSAaLQbrG6z1NFEjPJtOOTs95vnnPs3TH/84RycvUZqSoijY29sjL+Y0Tc14PMZWlsY0SJGTS9DMLqZV9JgtUztnENUYTdNQmIxVs6RarQIfrgUn6Vl1vG0oxx4mXXjo3esu9eLmve8/tzXZ4LZthdbgb1NGbB53hx0eJtyzp9sPaq2/RL7lQlN6vzTvUy5s28F739ay9ykrWesirZ+/O28o/WJt8myDLM05S56HKL7JMw729rGpikKzas9f1zWLxYKm1dXGXLHeonXoAJyzQQMby6HP5ueYPOPKlVCI0lqPcyFgZ63DE+quiWjEebQ25CbHGBMCh06hVDhPVS2pqiUvfuZ5Pvav/xXz+RwasBK0uGenZxRlEWaf5QV5lgeqRBmyQQYuSN/qZRWUESpUyMjzUCBTFznT+SKpdcEF+++9D1Lm3hNr2d0eCRsCY2lCyeUa3hQ06ysUttEPl00XvkwyuMMODwPuObXjRU43vKxpwLiWzIS+Z9UXzq8frztmMNKdqYaUTnD9Bd3wumPC9KpeURRlMFB5mImWKAZrM0ajEavVinPvcN7jbE8O5UOSdYWgjKEcDBHRLOdzqqZiMV0wHATPMctyynJAXgzwXuGcDdF6FSZd6DgV1xiDEIyXEo3zhlrVNLbm7KUT/s1v/RbHR7d48cUXoQGlBS2GmzdvIt6zmM/ZG49BwnAqTCzQmDzD1Bm2sWQ6TFfWCMvFIpRxP9jnE596Zi1AJa0rGyY3hOBXymnRoVVVq85Yrj/3jqRNx2jLrqfnzeX6283A6A47PKy4axLzy/lX30WqYzXJGP6iSw8Yt2w1SuvoDGp65RMR6DfWbxvShimtzjusC7rUkAbR4Jqa89USrQ1pJlSe54xGY5qqCQUWncfZJpalCcm8lXiuX7vJm77ibfyzf/qPcbOaql5RLRd462gay3iyx2AwCvkNVIESTaYzRoNeHl8AF6rviobGrxDRVIuKp//9b/Px3/53nN2+hRGPGEUxGlGYkulsisk0w+GQxWLOYDQiUyH5TZEX6HKAOEeuDDZOZ26qGmsdmQiTyR71crmuNHChlFJIl7PF4EXuoVWQJeHJRpCzT9J6XLtPou994BQuGNWdod1hh3XctRrwpetapYH0hqqsvajpFd0mO0toDXmbk7Zdc8FOp5lU6/sK9arGHJq2ICW1xeHJMtp8tMPhKMifaotZGqpqTlM5vAPlPVoLXgTb1IjkfNEXfRH/4Xc+Rh2n4VZVw3AwZjTaYzQaYoxhMBi1tciUVuQxi
biNEjmlYvYxo9FKOD854aP/6l9gqwWTsmRZr8iKIY89+Qb2Dg45Oz/hpRdfYCyhC5rP5xQpCbgDrwWUohiWOOcZjkbMzqeUgwGiFY1taKp1jS6qX9EjzUjznVSsnefho+46lT+66Jl6H6t1sN4hJqmCuItql0RL7eiEHXYIuCdOd6ux7EVgHB132E1+6PZPSF5n/9jd8fp8cThqCrJ1Xlf/xe00p2l6rHOOs7MZ2kFeligVKIHhcIQxeUzB6Bk2Q05PYOVD+kTnHXmWcb6qqVczTm59ihtX9nlaGWpbU9c1q1WYYlsWA4aDEVmWk5c5RZ6TaYO44D1rrdDGtO12ziEWpudTPv3JTzI7O2acK7wxFJmnyQr+yNd/I14ZTK741x/5CIvZCVpnLBaL9p4558CEXBaZ0WRZjvMekxlsYzFac3J2GjKqsT4jzPdv2WaQi76BTduvF5Vcn20Wtm/pJNfpfdd+K9Gwr42Ndh7vDjvcY8Kbbcukk4slD7XPvfZ1mxe52XazbtR6wbinDGZ3kpWFk6eaaNY2VFUduFXvKcoy5M/VQU+7p4TMaM7Pz1jMc5QtsHUFKFRmUHWNVo7jFz/BfFEzyDNWvRSGw1Ewtt5DtVoh3uJ1RsMSqS3e1Jg8IysLdJ6FYb0IjWjOjo/5g6d/j8nAoAcl1WKOKsZ8/IVztFqgzIim8bzlLV/Ov/u3/y+TyYTZdIbSmsY2WOfIKfDOUq88g8KhPGR5xsxOEYHlcsGFaWO959cnbzY/x6cWOgkVRiYhdadqd+6eo2o5YZHI7abfw4VzdyXaL0gAd9jhIcS9T45IsqMeTysinUxMOrlR8praQNolVQIEFYb3uudJtf9fNLj943fStFBufbVaUjeCs2Byg9ImnFdC4vLMGLI8x5gwO20+n9L4GjKFszUozajIGGRCoTU+c8wMlHlBUZQopRlN9lFasVrOmZ0cU2qDH88ZDAdkRlNkhmI4YLi3j9qboLMs5j4TjAi3X3qR/cmAQgpWuSYfFfz+7QWnJ59hUpZUVmOKfcbjMaPRGFBYV7dD/uVyQV1XbdVeay0TMybLDfhQ70xrHfJK9p6fUqHkkov8baI9EovuW7o2VABxKbApgQVWnjgNPOwf7LpvnwmOXhazmI+BuI3IBWO843l3eJjxMgpTdpD4zzp/F+RGmzpPuPiCrU9HTTPPoOc+pbNseEXJsDu0zsArJCbCqOoV0oDRZSydrlgul23mMGstw8GAvCwpywHaaE5eeJZmcYqtFYhmOCgxcYpqkQllUaDLkslkzJd+2RsZjvawzjI9Pebs+ecYKE12sMRPRuQi+MwgkwnGeUyeoUxINI6EROZaG/aGY8aZYrEqAM8je2Pq+RlNPUVnBcvGMigzVqsV4/GEs+lp9BTjvXbgmoYmzM+L2dI0QlBsOH9RfZC802BXU+8Zs6G1gbBI8ShZf7bpOfRyF7e8b9JUt8fdeNay3pluft5hh4cR96bT3RS2p3XQk4pxkYeQ7RrN3pdWb5tswfpZNlUM4SRKpbIyBE/WBa2uKMWyDtzr+fl5SHCTyqjHqhFZkTMcjxlP9tifTDj+7B+wmJ7gnWcwHFA1DotCqYwbjx1QTq5z9epVvuEbv4XhcMT8/IijF19g8cJnaZRiaB26OcAbgyoKlPJoo8lGJVIOMDoE+B65foOvfNvXcPLp32VgagaTEbePb/PFr3+MUWEojEFlGfOzU1y1Yqky9q8csqcPWC4XWGtDWkedhRwMWsUUlZbRYECe55zMpyiTo9RyXTubomY9uF4n1w+Eeu9RPtTw8HiUC9t6icnafAqg9oOg6499LZC6xavdebk7PMy4d3ph20qJ8iEvLQ+YBq1pWu7lB477kpQNvZNEDW7r37ZcIG3J8KTTVUpRNzV11TAcDoNkzLk28bdtGoZ7+4gIeV5gHXgvFIMhV770OtdvXOezzz3Nye0jSpMzzkrGV66yd3AFM5hw7dHXcTDZ59qjj1ItFqwWc05uvYg9P2exXGBnC576oi+DPEd7mJ3eZnF+hhkPyPYO8UqjRBgfHvIf/bF38plnnuCZf/cRiix0FiMUpQk0iBJHFvNWaK2Dh5zl2MUS0QZEMRiO0Epjck1VrYIUTGmKosRkGcT0miH4FmvO+ahakFgo0vc6Th8pIxcmUKQxRm+sEffpRh7er+fR2Ay2rXXUbr3672Wz3XbY4WHBvdELWzg50pDXw1o0jc5Ib76M6SXrJ8EOBjS94j3PbE2Ols7bpzQImca8p2ksINR1g/dhmG2txVuLZDlZljMajSjKgtFoEmavDQoODw7Z/6Iv5c1veRvz5TmL1RLxiiuPXKMcjSjKMXkxQIvBSYNgmZ6c0MznNKsly+PbVKfnXL/2KNl4zO3pKfPbL/DoU09Rn53DtRpfFoho8rzk2o3HGY/HPHrzJs/83m9TO09TVeQKlNFUjUUBe4cTDq7fpCiGzJcrJnsT6qqiqWuqmLYxJD535Canbmo8YLQmz3OqpQq165A2ruYkPbck9OueUXrGPnq0kmR+nl4gLVQCCbXTthvQrYHS3m+mb3h3NMMODyvuyegqrfDWhXpZrf8TXs42au18q0JYe5m56NGszcdv5WUbutw4/E2ie5FQOcHaFJ0Pky9S3bKyGIaEL3t7VFXFaDQKHnQ8nzGGQTmkLAcURcn+wT7Xb95kvL/PaDBA64zGNiHnrTEYVYAiTLxoLE1jKQclTzz1RVCveL5pOHrueQ5yx2J6zsAoRGluT5cU84provEq3BNRGqNDztmyKBnvHTK5eoM3zt5OvVrQVCtW9YL5qqauwJoM50OF4/PTE24f3aKOmciqumYyHlM3dbwflrqqWVUrMpOTMrz5NXc2BMICLetaNUGgalLX5qMhjkXvU2XjXkrI5DnfqwHtqyb6v4EkgdsZ3h0eRtyT0b2Mk/MoZG1V8o78NrFCt9XWAEsvUEPnL3eMg49l03XMmRCc7boOofq8LKibBussJgvVgCf7e4zH47YsezqYdRZtDOVgzGAwIS9zRAmlGgA2FpJUrOoKowzWNmilUHnB+MoV3vTVX8vrvvj1vP5L38CnPvYx8iwH5xgdPMIf+bb3Mr5xneHVR7BKo330HHWgGbCewhiMuUE9WYVZdQ6ctdTW4mzwJJfLihdf/Cyr5QJjcpx3MY1jEWaolQV1XeGcxOrBrs2pq7WmqsP9S/mME9mTgpaO+L8I4rqJKW3du56+t9Pthk7E+X49kM2Aau8pyvr6fo20l5vBbIcdXiu4o9FNHJ+nrbPY8o1tIKs/hCRtE0vTpOTXW469/UXtHat33JSNbC3pdZws4Rxt4ppVtcJkQRpWDEq89yERjFQcHx8z2dtjOBwznkzYOzjAZEJmwMbKDGK6UuFNXVHVVVRJhGvKy5LcWZrGMBqPmYwO2CvGVC8d4QRUWZIfHpLt7yN5FvhcpTBahwkkRoH2YB15OUCbHOtCFjIclIRcoV6gLC1VUzOfL7B1A1hOj09idQrB17CYL8jyDO9D2XeHY1AUVMtpO3rwkQ5opOPZBUG5ECQjcr0QDGlgFVKh
UdCKtiMIPwrajJ7tSML7jc6XNR633XWtMvCO093h4cTdPd12FhK9YWZKRtMZ2n7Zlt7OPd1uZ2S36TRbI07nbSX0jy0iiNOIEuraRS/Pc3p6RpZlbbXf2WzGYDBgOp1x8MSVkLvAGIrBgGvXb1AOBmhjWCyWOOejtCwkyamrVWsczs7OKPMC7z2LxZLhoCDPc5TSZFdzlM649Xsfh+kMBgNUkaMyg4u8t2ssDYEbFULnkZmsVVtoUyB10xotnYVH0tQNhwcHLGYzzs9OGU0OMFnBYj5nOp2ijWF/b4+qbjBGh7LzZYEyId+E0RnWNXgLKEWG0HiHSs+DYDyFoOFtjWovoOmhLV7ZseyJcY/5Knx/ZMKFZ72Zw6EvZ9thh4cR96ReSFNGE/qGs5v22wuUtcsvP2Y6TnsOkf77vjEJInm5CpFQzhyfjDE0jce5moODq7EKRKikO5vNyLKMW7dusbe3R1YUKK1DhWNjcNZhvaOum5jNy7JarTDRexQJBmkZq/YWRYGXDJMpDCAG1N4eq2s3mPoXsLmmFqidIycEpZwLmiulVKguQerAFForamsxJmvvR5tukjDT7tq16yzmc7yzONtg85y9vT2a5ZLz8yWr1YqyCFUulBLGexOmZ7dpbN3K7JJRbQf0npBngiD/0tED9pu6vZ5xbANv6bl21Hq3Qf9Z9vftPedt33fY4WHCHYk15Ql835aXJXi7QJw6mmqMhcCYiodWLSu7g6OjewAAIABJREFULenJ5V6PxP3Xq83qqE313uGAVVO3XrAxBV/2ZV8GhHSIdV1zcnLCahVK2hRlyZVHrjEcjSiKEmsDb7pcrtosZGdnU5yDo6MjFosFzjmapmK1WnB2dsL8fIqvK3RUUTS4EO0vc9SwpBFFY0OtNOccdR3yIHjvsXUdaAzvcda2xsqYMHMu3ZtUbt2YkC5yPB5z8+ajjCf7jIYT9iZ7DAcDvAhaZQwGJUpCxVTfWA4PD0OeCaVwLnmvIQ9vv4NUgBFBp/WpQ3CB2lGJt5XuubdtJEgEW1nZBnfbf96bHesueLbDw447Gt2k94TtBtLHwEo7cUFc1O26WDMsBlB6L9qmt7NttlL3cq6/tEm5IEpRNQ3eCbaRWJnB8hu/8f9wfj7D2lBc8uDggKIouPLIIxwcHHJ8+zYnt29zenqKaywiIQgVEuWc4WK6xMduPs4wGyA+lD6fTqcMy5K9vYOQ4yHxqo3DedBG0wQlLEqHmV+2DjxxP1m6c44qlt1x1mLT/Yz3OXm66U9E0EYzGo+5ceMGxoQkOPPZHIByNMSpcNzRZIyIYlCM8JLhrCdTpi1dLRIiYInjBcLz2eRi+16oB/z6cxK6acEiAupiruT+b2WTVtrNTtvhYcedA2l3kPX0Z4iJqDjRIU5gEEeMjwdhfZwDAawZl01hfcK6Ye5oirYMuHg0Cmsrrl69ztHRESKe8XjIN33TN/H8p1/k5qPXODw8oGkcn3r2OT7zmc9y48ZNRuMJAsxmU+o6byP+ySCsVhVnZycs53MaG3lXramcZeUqyjitOBglR7VaMpvNWMxnrOoaeSkPkx4mE5RRrVwtZUGzNg7qYz6KZIy1MXjr2gi/S3I61R2jjMUqp9Mp5XAYSsfPQw24xjuMEhSawaCgmufMl/OoVuiUBMT0ix6itxqeY+Juu/vO2igj5HBYT1SfPm8+v37dtv72/f2MMa/6AoM77PCHwZ2TmPdlRuoyDyXMUPJr6xTJhUrSpGSi+/KhO2o813i/3j7p/6bB+4YXbz0fZWShXtpv/uZv8sQTr+NjH/s37TD9y974Rq5ff5Qnn3ySa9euh0Qu8dz9oXxRFDSNZbVYcHT0IpPJAYOyjMNxoakqFrbBmsApN1VFtVgAYK1jUc2wWlM5hxchL/K22rD3nizLMNrgBBpnMXF6bpLBKfP/s/dmv7ak513/5x2qaq21pzP0cGzHdpx4IJYc+ecQy78fP0CxoiBHSUSUSMgR4gIhbkiEQMJMN1wkv38AKUhcREFEQmK6CAgFhQhiA8kFBCcKJMGQgaS7T3efYU9rrRre4XfxvO9bVWvvM9ju7t19zn6s9tl7r7Vq1apa9dTzfp/v8/2a4hQBiT1iNFVdYaylamqWe/t8YG8PfGAYerquY9hsiH2grhu01iwWK94c+pxO82nKJ1Us4tPJiUwYITyadzvFaafUr8JgmDx392a9e85BqH7Xle6TY7PZXPUuXMdbHI+vdLNyVFKpuvB4GohQ6TnlKclJovyayaGT2J1WK/EIill6ME1ZBYYgko19u0EpmfrSWtP3A1/72tfEKWK1T1UZfv8P/oCTkzOUUqw3G45u3OLo6FDEbYwqQwYPHtzHe0dlLbdv3cFYVWx/UOnm4iMOwTVBKF6b9ZrNZoMLkXq1xLkBNwziHDzR1o0xsqgbrNbEUi2LV1xtLS6E4jOnleCrItMbqaqaG0e36Luek4f38b5nvT4nxsjh0Q2CUgx9z95+I64W1uKCVJKF7kcgRk1I3N1paCgCOE9qcO02y6ane/pZy40twUTTG/N1wr0YTdNc4C8fHR1dNxyfsXg6Pd1Ef9oNY8woYj0b46UsWx9FYbj0okvJeT69lJ83memf0ZKMYLtJ91aabYr3ve9l7t17QBMsq1VFVdVJkUvT9y0PHzp0uvhXe3uF69o0DYvDGzRLQ9f19O02NaQi0XmaZkHXtsmFYmDotmzWZ7h2Sx8iZ6eWgGHRLNHGYGxFjNLc29vboxt6TDCpMWhKknIhFCEfrVWqeOVm1jRNgkE8Vls26y3gWa32UVGx3W5p25abN2+KcM9yRVVVDMmBOKZGmsyuxLHBNjv2+RTs2vSMrIrdplg5lxMI6AJmnzev5o/tJv3rgL/wF/4Ct27duurduI63OZ7KIy2EgGF0gZ1fdKpc1LmjDZBJvJddxLvvUV4iW9t5Rr5Qx82GiZuv1lpUvZLeQoYMTk5OuHPnJTablls3b3Pz5i1WeythFISArSoWTcPe3h7W2uLSsFgs8GFg2Hb4oRf34H6gqWtcP+B6kVKsjKXrO0Ln2Jyec/+NN6CyRKVolgsePLxPVdcsV3ulWWd0hbHJUDINTuRBkzyWodJQSYhRGm3pxlA3NYc3j+iHjg995CMcP3hA10klndXUTk9PuXXrFs1ihVeaqqrou4HgPdkhLZB1kMUGaLcRdhleW850OVezV6Sb4eXfH0gqZcFfoB5ex3U8j/F1eaRdXgUJ53Q0KExVrlJE5VGYOXv+EdtX06w6Yy2A6AnkikoT/agBYa2lri0y/aaLdToIv/bmrVt853d+Gu8DBwcHDN6z2WyKQIx3Dqs1N2/cwCea13a7IabEvj5fE2JIbr8yVmusxrmB6BzrkxOO799jvd0QNxFlLPViwc3FHhFwySI9fyYfI0PXE4wclxCC0Mby59VKlvlSHstxDJGqrlFaY6sK5wbqqiLGCtf3c4YCsL/aQwVN3weRULgkJ2olvGE53GOjTSt9oVolkmCPKUGXkmyn9N5ddoJ8VwBjCpvlWnfhOp7neCxlbDZZNMF1x6Vjvuh
2qEcFz1WpUbODAe5UU5mXOzp27T5Hpt9Cea5MRGltSyUu8IArle6tW7eKa+/v/M7vcHJygvMy/JB5uXVdE0Kg6zrcMNC1Lev1Oe1mw+npKev1msVyya1bt1mslgAYq4lhQOFp2zUnx/eprOHo1k3qpmG7WbPdbNi2G9puy8nJcdJDsIBKQu8qcYBdabQR5HP4KMJCWklyz7Qz7xx1VbFoFgQfGJwrWhR5e3Vdp4ad5f0f+AAq8ZpzhCQ5JuyIUFTFUFIBh0togYLohKRWlrm+k+lApcqXaIrl5vOSvw4mjkJI1wn3Yty5c0f0QSbx5S9/+RrP3QmlFB/+8Ieveje+qXgqeOFRdKHLYdkJjhsTkyFq8gU7fV5OFrLtxB/NL9/ZeK6qClUtRoIXrdaqqnAJ0z042BdcdrFgs9ngQ2Bv74C+7/mD3/99Qow0TcO2adg/2Ge12kNrXRLg2dlp8VyrqopmuWRvtQKEprZZt0CgQtG3PeebtUAAVU29DHR9x/rsmOXeir69SWUrtts18hE99WoJRmOURilwrsd7nbjMRm48mU8bIcM3zjliFIF08Xyr8aGnaRYMwyCVeErAVV1xfHzCYrGg3W7RMbE1GFXXLmglKJUq3zhjGex2NadsiFlFHGECLhXthVm1PGGsXCeTeXzf930ft2/fnv3t+7//+0uv4joktNb8+T//5/mn//Sf8rWvfe2qd+cbisdXulAqx5wI5onzIgSRMUBVtnDx4tptyMTokWVrchTLOOKkmsqc3WzDHoEQPbmQq+umMA2UUty9e5e23aRmUsS5pEZWVUQf8Km6tEan6bGe4B2KiLWWGzdusre3x6pZlGpUEvOA7wdOj094cO+eCPsYS9UsWB0dYOqK9each/fvsT07JThxEj4+Pub+g/sE7xJP15BxF+9lYCJ6Pw4sKJko82mYIt/8soNEvVhQZ++2vYM0ZSeVfAzS5FwtVyP9LITZqZje00ryRBV4YV6NSqNM6fy68fljwp3/N23M5fHiXTjpWmnsOr6RsNbyoz/6o3zyk5+86l35huLxE2kxLfoTpkdqlJUJM5USqJLkOm2ujcVq6vw/sbLRKGWThoAuDIUYYkruZUsEYnleBIytODw8Yn//gPV6zenpKXVd0zQNdVXjXI9zQ0nmWimWqxV102CMTZxRjdaGGzdvcnBwQFVVLJcrFNC1W7puS9tvUBr8MLA+P+Phw/vYSjQZ6mbBYrXP4c2bLFYrtuszTo5PODs9ZX22pm23bNbnnJ0c03ctw+BR2qJ0jbGjlm8sCTak/+Is4S4SrWjoe6ytsHXFZrtls92UIY+qFtGfrm8vQkIxD1irgsPL6ZPjGvNSI8G8MeskQ3GyVFEVo9JHxzzBXtYfuK52JT7+8Y9zdHR01bvxro2///f//oXvSl3XfN/3fd8V7dE3F49PuiDjvJMLpEgD5ipH52bapJLJEG+k8Fthjvld8k6zChgkOVpb0dQLmmbJYrlgsVwIfLBcslgusZVlsVjifaDvB5bLZZl6y8MIuZLUWomLxP4exlppnDnHInmM2cpS14KrqXRD6fuOmCbPgg+C/XYdp6fH2MqwOthn7/CA1f4ey9Ue+/uHrPb2QUG72bA+PxNrdIT+1rUt52cndN16nHhLFXokluXkeIwmuHqUxX1VVdS1DHJYW7NYLgq1DEAlzDpj3Pn1+YRnmCGfIznPFFqgrCLiCBnkRpsA67OKuVSxOYFPb7yXQES7Nj/XAZ/+9KevqWKPiS996UvPFMXwySpj6d8Lo7qzzCrODuNjXjrwCJZbKuLJtqbYYUkoQSCDkiiyEEzil8YQJ0vSSNMsqJuaoW9pWxH4zhXucrlk/2CfqlokuUdJAgcHosi13qyJMXL71i2qSswe9/f3McZgrfiLnZ1tcL0rUo4KjR8GNpsN3TCwf3TEav+Aqm5QMWKNYZsgDdd3tNs1wR0WyCLGKEk8VesyPbYAbFk9KCgVK4wQjDGGxWJB37Y0TcPe3grnBvpty2LRcOfOHdrNFuccwTMmYDVhnMgan+yXlokL8r6T86vK/5UVh8pPniZKNXndDLoYmSdaKUI6b3Gy4plNtT3H8alPfYo7d+5c+Pvf/tt/my5ZM13HsxVPGAO+GGFe5qBEN5A4e0H2GkglkB7ZDPKyixqrOWnvNtucc0TnSiIwxuCCyDnuH+wRkl7B0dE+bhDr9eXeihtHR6z29qhsTdMsODg45PDwBoMb6AexvhFFLst2vUlQgzSinHOs1+ciEWmqcoPwzrHdbDk7O8NUNYvVPkpXVHWNTrxYU1uMKH/z8PiEoe8Yhp623bJa7UnOioF2u8YaK44VjcZakxJgSoxJ9MZ7h3MCNehUFW+Pj9lsNtS1UMaccxDTsfEOnxJkZhmMzZg4OXvFmOfCeZYEnel6U/A37V/qZRYq2WS7F2GE9JwQymhcTv5cNqTxnMVHPvIRbt68eeHv//Af/sNrbYpnNJ7YybiMtZDj0boJplRACj0Csikua75dtm2llNCcohiCayUc1hBkid11LTFG9vaEsdDUDdYahqGHVEkuVws+8C0f4ODgkM3mPNmZO46ODqmqivPzc0IMLJcN3vdst2tOT44FMzWGuqkSRCKTXH3XsdluqRYLbNOglRDjjBGcxdYLlvsH3HjpRfb2V7ihT+7ErXyuJFy+aBYYrRi6Fu96nPf4CD5VoKEkXU/fb1MjUPQbDg8FwmiaBQeHhyyXe/TdwMnpGT6ANkZGLHxIgxFpm1MYgBGzR4m2bq6GVSRp8O7QBseTUxJopguq3ceUQgzYBHfP/YC8ctm5S1/HJP7iX/yLnJ2dXfVuvKvih3/4h696F96yeKK0I4zKYJc9Nv39UXjerprC9DEZd00No0lmzs0klTMAuYEn5o7OOdq2S/snHfyDw0PquoLoabsWpRTvf//7iTHQdcKdhcjLL9/h6OgG5+fneO/Fqt172rZls1kXXDj/672InA9uYNO2MlFWWdFV0Iqu7TDa0DQ1xmgWqxV7+wfcun07qYu5lECFAaGN6Fb0fY9WMKSJNZkSk0ZWTrwg3VprTRHmOTw64sUXX2K5XLLYk4agrWqaxUKgEKtk8i5tS6CRnTwXshElBSvWKSHqyXl/ZOw06C5E+SoolJGhEibfER8DPOfshe/+7u/mYx/72IW//+Iv/mLx/rsOiX/1r/7VVe/CWxaPhRdmfM2dmEEDebmYlp7ja4R/q7BEFWZV0/Q9drbMvAIaW+cyGVbTOwdRYwwJ61zSLJbUVYXWhhADt2/f5KWX38frr9/j6Gifk5Nj9veP+OAHP4T3ntdfv0sIUUTAtWIYOqw1HBwciLYC4EOgrir6vi+6upvNhlVqvBljqOuGbrvFB4fSSyoridcNA6vDQ2KMDH3HMgoG2ywburYv1kLOiRllXdfi4GAMzAYVJGmGgmdL4rJVRbNYsjl+QIgCtaDEnHKxt8/Z2RnOOXwcG5mlaTbhX0dItusQE2ZbVhlpWk7kKCd2SqWpOj1NavaHsVEW0xuktB+i0LaNnmzx+YybN2+yv79/1btxHe9wPH
"source": [
"# 训练数据集\n",
"train_dataset = PetDataset(train_images_path, label_images_path, mode='train')\n",
"\n",
"# 验证数据集\n",
"val_dataset = PetDataset(train_images_path, label_images_path, mode='test')\n",
"\n",
"# 抽样一个数据\n",
"image, label = train_dataset[0]\n",
"\n",
"# 进行图片的展示\n",
"plt.figure()\n",
"\n",
"plt.subplot(1,2,1), \n",
"plt.title('Train Image')\n",
"plt.imshow(image.transpose((1, 2, 0)).astype('uint8'))\n",
"plt.axis('off')\n",
"\n",
"plt.subplot(1,2,2), \n",
"plt.title('Label')\n",
"plt.imshow(np.squeeze(label, axis=0).astype('uint8'), cmap='gray')\n",
"plt.axis('off')\n",
"\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "d9JyZz3ZEnQ1"
},
"source": [
"## 4.模型组网\n",
"\n",
"U-Net是一个U型网络结构,可以看做两个大的阶段,图像先经过Encoder编码器进行下采样得到高级语义特征图,再经过Decoder解码器上采样将特征图恢复到原图片的分辨率。"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "wi-ouGZL--BN"
},
"source": [
"### 4.1 定义SeparableConv2d接口\n",
"\n",
"我们为了减少卷积操作中的训练参数来提升性能,是继承paddle.nn.Layer自定义了一个SeparableConv2d Layer类,整个过程是把`filter_size * filter_size * num_filters`的Conv2d操作拆解为两个子Conv2d,先对输入数据的每个通道使用`filter_size * filter_size * 1`的卷积核进行计算,输入输出通道数目相同,之后在使用`1 * 1 * num_filters`的卷积核计算。"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "0c-FikH-A4qP"
},
"outputs": [],
"source": [
"class SeparableConv2d(paddle.nn.Layer):\n",
" def __init__(self, \n",
" in_channels, \n",
" out_channels, \n",
" kernel_size, \n",
" stride=1, \n",
" padding=0, \n",
" dilation=1, \n",
" groups=None, \n",
" weight_attr=None, \n",
" bias_attr=None, \n",
" data_format=\"NCHW\"):\n",
" super(SeparableConv2d, self).__init__()\n",
" # 第一次卷积操作没有偏置参数\n",
" self.conv_1 = paddle.nn.Conv2d(in_channels, \n",
" in_channels, \n",
" kernel_size, \n",
" stride=stride,\n",
" padding=padding,\n",
" dilation=dilation,\n",
" groups=in_channels, \n",
" weight_attr=weight_attr, \n",
" bias_attr=False, \n",
" data_format=data_format)\n",
" self.pointwise = paddle.nn.Conv2d(in_channels, \n",
" out_channels, \n",
" 1, \n",
" stride=1, \n",
" padding=0, \n",
" dilation=1, \n",
" groups=1, \n",
" weight_attr=weight_attr, \n",
" data_format=data_format)\n",
" \n",
" def forward(self, inputs):\n",
" y = self.conv_1(inputs)\n",
" y = self.pointwise(y)\n",
"\n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "zNyzlqQmBEEi"
},
"source": [
"### 4.2 定义Encoder编码器\n",
"\n",
"我们将网络结构中的Encoder下采样过程进行了一个Layer封装,方便后续调用,减少代码编写,下采样是有一个模型逐渐向下画曲线的一个过程,这个过程中是不断的重复一个单元结构将通道数不断增加,形状不断缩小,并且引入残差网络结构,我们将这些都抽象出来进行统一封装。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "OpUi9VUeGmXp"
},
"outputs": [],
"source": [
"class Encoder(paddle.nn.Layer):\n",
" def __init__(self, in_channels, out_channels):\n",
" super(Encoder, self).__init__()\n",
" \n",
" self.relu = paddle.nn.ReLU()\n",
" self.separable_conv_01 = SeparableConv2d(in_channels, \n",
" out_channels, \n",
" kernel_size=3, \n",
" padding='same')\n",
" self.bn = paddle.nn.BatchNorm2d(out_channels)\n",
" self.separable_conv_02 = SeparableConv2d(out_channels, \n",
" out_channels, \n",
" kernel_size=3, \n",
" padding='same')\n",
" self.pool = paddle.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)\n",
" self.residual_conv = paddle.nn.Conv2d(in_channels, \n",
" out_channels, \n",
" kernel_size=1, \n",
" stride=2, \n",
" padding='same')\n",
"\n",
" def forward(self, inputs):\n",
" previous_block_activation = inputs\n",
" \n",
" y = self.relu(inputs)\n",
" y = self.separable_conv_01(y)\n",
" y = self.bn(y)\n",
" y = self.relu(y)\n",
" y = self.separable_conv_02(y)\n",
" y = self.bn(y)\n",
" y = self.pool(y)\n",
" \n",
" residual = self.residual_conv(previous_block_activation)\n",
" y = paddle.add(y, residual)\n",
"\n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "nPBRD42WGmuH"
},
"source": [
"### 4.3 定义Decoder解码器\n",
"\n",
"在通道数达到最大得到高级语义特征图后,网络结构会开始进行decode操作,进行上采样,通道数逐渐减小,对应图片尺寸逐步增加,直至恢复到原图像大小,那么这个过程里面也是通过不断的重复相同结构的残差网络完成,我们也是为了减少代码编写,将这个过程定义一个Layer来放到模型组网中使用。"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "ltVurq8OGvK7"
},
"outputs": [],
"source": [
"class Decoder(paddle.nn.Layer):\n",
" def __init__(self, in_channels, out_channels):\n",
" super(Decoder, self).__init__()\n",
"\n",
" self.relu = paddle.nn.ReLU()\n",
" self.conv_transpose_01 = paddle.nn.ConvTranspose2d(in_channels, \n",
" out_channels, \n",
" kernel_size=3, \n",
" padding='same')\n",
" self.conv_transpose_02 = paddle.nn.ConvTranspose2d(out_channels, \n",
" out_channels, \n",
" kernel_size=3, \n",
" padding='same')\n",
" self.bn = paddle.nn.BatchNorm2d(out_channels)\n",
" self.upsample = paddle.nn.UpSample(scale_factor=2.0)\n",
" self.residual_conv = paddle.nn.Conv2d(in_channels, \n",
" out_channels, \n",
" kernel_size=1, \n",
" padding='same')\n",
"\n",
" def forward(self, inputs):\n",
" previous_block_activation = inputs\n",
"\n",
" y = self.relu(inputs)\n",
" y = self.conv_transpose_01(y)\n",
" y = self.bn(y)\n",
" y = self.relu(y)\n",
" y = self.conv_transpose_02(y)\n",
" y = self.bn(y)\n",
" y = self.upsample(y)\n",
" \n",
" residual = self.upsample(previous_block_activation)\n",
" residual = self.residual_conv(residual)\n",
" \n",
" y = paddle.add(y, residual)\n",
" \n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "vLKLj2FMGvdc"
},
"source": [
"### 4.4 训练模型组网\n",
"\n",
"按照U型网络结构格式进行整体的网络结构搭建,三次下采样,四次上采样。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "an1YFILpG4Xy"
},
"outputs": [],
"source": [
"class PetModel(paddle.nn.Layer):\n",
" def __init__(self, num_classes):\n",
" super(PetModel, self).__init__()\n",
"\n",
" self.conv_1 = paddle.nn.Conv2d(3, 32, \n",
" kernel_size=3,\n",
" stride=2,\n",
" padding='same')\n",
" self.bn = paddle.nn.BatchNorm2d(32)\n",
" self.relu = paddle.nn.ReLU()\n",
"\n",
" in_channels = 32\n",
" self.encoders = []\n",
" self.encoder_list = [64, 128, 256]\n",
" self.decoder_list = [256, 128, 64, 32]\n",
"\n",
" # 根据下采样个数和配置循环定义子Layer,避免重复写一样的程序\n",
" for out_channels in self.encoder_list:\n",
" block = self.add_sublayer('encoder_%s'.format(out_channels),\n",
" Encoder(in_channels, out_channels))\n",
" self.encoders.append(block)\n",
" in_channels = out_channels\n",
"\n",
" self.decoders = []\n",
"\n",
" # 根据上采样个数和配置循环定义子Layer,避免重复写一样的程序\n",
" for out_channels in self.decoder_list:\n",
" block = self.add_sublayer('decoder_%s'.format(out_channels), \n",
" Decoder(in_channels, out_channels))\n",
" self.decoders.append(block)\n",
" in_channels = out_channels\n",
"\n",
" self.output_conv = paddle.nn.Conv2d(in_channels, \n",
" num_classes, \n",
" kernel_size=3, \n",
" padding='same')\n",
" \n",
" def forward(self, inputs):\n",
" y = self.conv_1(inputs)\n",
" y = self.bn(y)\n",
" y = self.relu(y)\n",
" \n",
" for encoder in self.encoders:\n",
" y = encoder(y)\n",
"\n",
" for decoder in self.decoders:\n",
" y = decoder(y)\n",
" \n",
" y = self.output_conv(y)\n",
" \n",
" return y"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "6Nf7hQ60G4sj"
},
"source": [
"### 4.5 模型可视化\n",
"\n",
"调用飞桨提供的summary接口对组建好的模型进行可视化,方便进行模型结构和参数信息的查看和确认。\n",
"@TODO,需要替换"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 1000
},
"colab_type": "code",
"id": "1_MXfWkZeSdE",
"outputId": "4c9870de-9eb6-47e8-e88c-79509ef78cf5",
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "--------------------------------------------------------------------------------\n Layer (type) Input Shape Output Shape Param #\n================================================================================\n Conv2d-22 [-1, 3, 160, 160] [-1, 32, 80, 80] 896\n BatchNorm2d-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 64\n ReLU-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 0\n ReLU-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0\n Conv2d-33 [-1, 128, 20, 20] [-1, 128, 20, 20] 1,152\n Conv2d-34 [-1, 128, 20, 20] [-1, 256, 20, 20] 33,024\nSeparableConv2d-11 [-1, 128, 20, 20] [-1, 256, 20, 20] 0\n BatchNorm2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 512\n Conv2d-35 [-1, 256, 20, 20] [-1, 256, 20, 20] 2,304\n Conv2d-36 [-1, 256, 20, 20] [-1, 256, 20, 20] 65,792\nSeparableConv2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0\n MaxPool2d-6 [-1, 256, 20, 20] [-1, 256, 10, 10] 0\n Conv2d-37 [-1, 128, 20, 20] [-1, 256, 10, 10] 33,024\n Encoder-6 [-1, 128, 20, 20] [-1, 256, 10, 10] 0\n ReLU-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 0\nConvTranspose2d-15 [-1, 64, 80, 80] [-1, 32, 80, 80] 18,464\n BatchNorm2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 64\nConvTranspose2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 9,248\n UpSample-8 [-1, 64, 80, 80] [-1, 64, 160, 160] 0\n Conv2d-41 [-1, 64, 160, 160] [-1, 32, 160, 160] 2,080\n Decoder-8 [-1, 64, 80, 80] [-1, 32, 160, 160] 0\n Conv2d-42 [-1, 32, 160, 160] [-1, 4, 160, 160] 1,156\n================================================================================\nTotal params: 167,780\nTrainable params: 167,780\nNon-trainable params: 0\n--------------------------------------------------------------------------------\nInput size (MB): 0.29\nForward/backward pass size (MB): 43.16\nParams size (MB): 0.64\nEstimated Total Size (MB): 44.10\n--------------------------------------------------------------------------------\n\n"
},
{
"output_type": "execute_result",
"data": {
"text/plain": "{'total_params': 167780, 'trainable_params': 167780}"
},
"metadata": {},
"execution_count": 11
}
],
"source": [
"from paddle.static import InputSpec\n",
"\n",
"paddle.disable_static()\n",
"num_classes = 4\n",
"model = paddle.Model(PetModel(num_classes))\n",
"model.summary((3, 160, 160))"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "j9Trlcvj8R7L"
},
"source": [
"## 5.模型训练"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "8Sskbyz58X4J"
},
"source": [
"### 5.1 配置信息\n",
"\n",
"定义训练BATCH_SIZE、训练轮次和计算设备等信息。"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "4fSkTiRB8OpP"
},
"outputs": [],
"source": [
"BATCH_SIZE = 32\n",
"EPOCHS = 15\n",
"device = paddle.set_device('gpu')\n",
"paddle.disable_static(device)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "x_vaedRa8eoy"
},
"source": [
"### 5.3 自定义Loss\n",
"\n",
"在这个任务中我们使用SoftmaxWithCrossEntropy损失函数来做计算,飞桨中有functional形式的API,这里我们做一个自定义操作,实现一个Class形式API放到模型训练中使用。没有直接使用CrossEntropyLoss的原因主要是对计算维度的自定义需求,本次需要进行softmax计算的维度是1,不是默认的最后一维,所以我们采用上面提到的损失函数,通过axis参数来指定softmax计算维度。"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "AEZq_jT78jNe"
},
"outputs": [],
"source": [
"class SoftmaxWithCrossEntropy(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(SoftmaxWithCrossEntropy, self).__init__()\n",
"\n",
" def forward(self, input, label):\n",
" loss = F.softmax_with_cross_entropy(input, \n",
" label, \n",
" return_softmax=False,\n",
" axis=1)\n",
" return paddle.mean(loss)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "rj6MPPMkJIdZ"
},
"source": [
"### 5.4 启动模型训练\n",
"\n",
"使用模型代码进行Model实例生成,使用prepare接口定义优化器、损失函数和评价指标等信息,用于后续训练使用。在所有初步配置完成后,调用fit接口开启训练执行过程,调用fit时只需要将前面定义好的训练数据集、测试数据集、训练轮次(Epoch)和批次大小(batch_size)配置好即可。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {
"base_uri": "https://localhost:8080/",
"height": 51
},
"colab_type": "code",
"id": "m-cVyjNreSdO",
"outputId": "9b37dd07-746b-41cc-c8e2-687a83b1ad75",
"tags": []
},
"outputs": [],
"source": [
"optim = paddle.optimizer.RMSProp(learning_rate=0.001, \n",
" rho=0.9, \n",
" momentum=0.0, \n",
" epsilon=1e-07, \n",
" centered=False,\n",
" parameters=model.parameters())\n",
"model = paddle.Model(PetModel(num_classes, model_tools))\n",
"model.prepare(optim, \n",
" SoftmaxWithCrossEntropy())\n",
"\n",
"model.fit(train_dataset, \n",
" val_dataset, \n",
" epochs=EPOCHS, \n",
" batch_size=BATCH_SIZE\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-mouwS1kJRqJ"
},
"source": [
"## 6.模型预测"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "Dvjxu91DJd1G"
},
"source": [
"### 6.1 预测数据集准备和预测\n",
"\n",
"继续使用PetDataset来实例化待预测使用的数据集。这里我们为了方便没有在另外准备预测数据,复用了评估数据。\n",
"\n",
"我们可以直接使用model.predict接口来对数据集进行预测操作,只需要将预测数据集传递到接口内即可。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "Ur088_vjeSdR"
},
"outputs": [],
"source": [
"predict_results = model.predict(val_dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {
"colab_type": "text",
"id": "-DpAEFBSJioy"
},
"source": [
"### 6.2 预测结果可视化\n",
"\n",
"从我们的预测数据集中抽3个动物来看看预测的效果,展示一下原图、标签图和预测结果。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {
"colab": {},
"colab_type": "code",
"id": "1mfaFkO5S1PU"
},
"outputs": [],
"source": [
"plt.figure(figsize=(10, 10))\n",
"\n",
"i = 0\n",
"mask_idx = 0\n",
"\n",
"for data in val_dataset:\n",
" if i > 8: \n",
" break\n",
" plt.subplot(3, 3, i + 1)\n",
" plt.imshow(data[0].transpose((1, 2, 0)).astype('uint8'))\n",
" plt.title('Input Image')\n",
" plt.axis(\"off\")\n",
"\n",
" plt.subplot(3, 3, i + 2)\n",
" plt.imshow(np.squeeze(data[1], axis=0).astype('uint8'), cmap='gray')\n",
" plt.title('Label')\n",
" plt.axis(\"off\")\n",
" \n",
" \n",
" data = val_preds[0][mask_idx][0].transpose((1, 2, 0))\n",
" mask = np.argmax(data, axis=-1)\n",
" mask = np.expand_dims(mask, axis=-1)\n",
"\n",
" plt.subplot(3, 3, i + 3)\n",
" plt.imshow(np.squeeze(mask, axis=2).astype('uint8'), cmap='gray')\n",
" plt.title('Predict')\n",
" plt.axis(\"off\")\n",
" i += 3\n",
" mask_idx += 1\n",
"\n",
"plt.show()"
]
}
],
"metadata": {
"accelerator": "GPU",
"colab": {
"collapsed_sections": [],
"name": "pets_image_segmentation_U_Net_like.ipynb",
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3.7.4 64-bit",
"language": "python",
"name": "python_defaultSpec_1599452401282"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4-final"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
\ No newline at end of file
基于U型语义分割模型实现的宠物图像分割
=====================================
本示例教程当前是基于2.0-beta版本Paddle做的案例实现,未来会随着2.0的系列版本发布进行升级。
1.简要介绍
----------
在计算机视觉领域,图像分割指的是将数字图像细分为多个图像子区域的过程。图像分割的目的是简化或改变图像的表示形式,使得图像更容易理解和分析。图像分割通常用于定位图像中的物体和边界(线,曲线等)。更精确的,图像分割是对图像中的每个像素加标签的一个过程,这一过程使得具有相同标签的像素具有某种共同视觉特性。图像分割的领域非常多,无人车、地块检测、表计识别等等。
本示例简要介绍如何通过飞桨开源框架实现图像分割。这里采用了图像分割领域中比较熟知的U-Net网络结构,它是在FCN基础上改进得到的深度学习网络,包含下采样(编码器,特征提取)和上采样(解码器,分辨率还原)两个阶段,因模型结构形似字母U而得名U-Net。
2.环境设置
----------
导入一些比较基础常用的模块,确认自己的飞桨版本。
.. code:: ipython3
import os
import io
import numpy as np
import matplotlib.pyplot as plt
from PIL import Image as PilImage
import paddle
from paddle.nn import functional as F
paddle.__version__
.. parsed-literal::
'0.0.0'
3.数据集
--------
3.1 数据集下载
~~~~~~~~~~~~~~
本案例使用Oxford-IIIT
Pet数据集,官网:https://www.robots.ox.ac.uk/~vgg/data/pets
数据集统计如下:
.. figure:: https://www.robots.ox.ac.uk/~vgg/data/pets/breed_count.jpg
   :alt: 数据集统计信息

   数据集统计信息
数据集包含两个压缩文件:
1. 原图:https://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
2. 分割图像:https://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
.. code:: ipython3
!curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/images.tar.gz
!curl -O http://www.robots.ox.ac.uk/~vgg/data/pets/data/annotations.tar.gz
!tar -xf images.tar.gz
!tar -xf annotations.tar.gz
3.2 数据集概览
~~~~~~~~~~~~~~
首先我们先看看下载到磁盘上的文件结构是什么样,来了解一下我们的数据集。
1. 首先看一下images.tar.gz这个压缩包,该文件解压后得到一个images目录,这个目录比较简单,里面直接放的是用类名和序号命名好的图片文件,每个图片是对应的宠物照片。
.. code:: bash
.
├── samoyed_7.jpg
├── ......
└── samoyed_81.jpg
2. 然后我们再看下annotations.tar.gz,文件解压后的目录里面包含以下内容,目录中的README文件对每个目录和文件做了比较详细的介绍,我们可以通过README来查看每个目录文件的说明。
.. code:: bash
.
├── README
├── list.txt
├── test.txt
├── trainval.txt
├── trimaps
│   ├── Abyssinian_1.png
│   ├── Abyssinian_10.png
│   ├── ......
│   └── yorkshire_terrier_99.png
└── xmls
├── Abyssinian_1.xml
├── Abyssinian_10.xml
├── ......
└── yorkshire_terrier_190.xml
本次我们主要使用到images和annotations/trimaps两个目录,即原图和三元图像(trimap)文件,前者作为训练的输入数据,后者是对应的标签数据。
我们来看看这个数据集给我们提供了多少个训练样本。
.. code:: ipython3
train_images_path = "images/"
label_images_path = "annotations/trimaps/"
print("用于训练的图片样本数量:", len([os.path.join(train_images_path, image_name)
for image_name in os.listdir(train_images_path)
if image_name.endswith('.jpg')]))
.. parsed-literal::
用于训练的图片样本数量: 7390
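标签图片(trimaps目录下的png文件)也可以用同样的方式统计一下,确认与原图数量一一对应。下面是一段简单的示意代码,沿用上面已定义的label_images_path:

.. code:: ipython3

    print("用于训练的标签样本数量:", len([os.path.join(label_images_path, label_name)
                                          for label_name in os.listdir(label_images_path)
                                          if label_name.endswith('.png') and not label_name.startswith('.')]))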
3.3 数据集类定义
~~~~~~~~~~~~~~~~
飞桨(PaddlePaddle)数据集加载方案是统一使用Dataset(数据集定义) +
DataLoader(多进程数据集加载)。
首先我们进行数据集的定义。数据集定义主要是实现一个新的Dataset类,继承父类paddle.io.Dataset,并实现父类中的两个抽象方法:\ ``__getitem__``\ 和\ ``__len__``\ 。
.. code:: python
class MyDataset(Dataset):
def __init__(self):
...
# 每次迭代时返回数据和对应的标签
def __getitem__(self, idx):
return x, y
# 返回整个数据集的总数
def __len__(self):
return count(samples)
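为了更直观地理解Dataset与DataLoader的配合方式,下面给出一个极简的示意:数据是随机构造的占位数据,places等参数的写法参考本系列教程中2.0-beta的用法,DataLoader的具体参数与迭代方式请以所用版本的官方文档为准,这里仅作演示:

.. code:: ipython3

    import numpy as np
    import paddle
    from paddle.io import Dataset, DataLoader

    class RandomDataset(Dataset):
        # 用随机数据演示Dataset接口约定的占位实现,非本案例真实数据
        def __init__(self, num_samples=8):
            self.num_samples = num_samples

        def __getitem__(self, idx):
            x = np.random.rand(3, 160, 160).astype('float32')            # 模拟CHW格式的图像
            y = np.random.randint(0, 4, (1, 160, 160)).astype('int64')   # 模拟分割标签
            return x, y

        def __len__(self):
            return self.num_samples

    loader = DataLoader(RandomDataset(),
                        places=paddle.CPUPlace(),
                        batch_size=4,
                        shuffle=True)
    for batch_id, data in enumerate(loader):
        x, y = data[0], data[1]          # 每个batch按字段组织
        print(batch_id, x.shape, y.shape)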
在数据集内部可以结合图像数据预处理相关API进行图像的预处理(改变大小、反转、调整格式等)。
由于加载进来的图像不一定都符合自己的需求,举个例子,已下载的这些图片里面就会有RGBA格式的图片,这个时候图片就不符合我们所需3通道的需求,我们需要进行图片的格式转换,那么这里我们直接实现了一个通用的图片读取接口,确保读取出来的图片都是满足我们的需求。
另外,图片加载出来的默认shape是HWC,这时要确认是否满足后面训练的需要。如果某个Layer默认的数据格式与之不一致,可以看该Layer是否有参数能调整格式;不过当layer较多时,还是直接调整原始数据的shape更稳妥,否则每个layer都要单独设置参数,一旦有遗漏就会导致训练出错。因此本案例直接对数据源的shape做了统一调整,从HWC转换成CHW(这也是飞桨卷积等API默认的输入格式),方便后续模型训练。
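HWC到CHW的转换本质上就是一次维度转置,可以用下面几行numpy代码直观感受一下(仅为示意,非本案例真实数据):

.. code:: ipython3

    import numpy as np

    img_hwc = np.zeros((160, 160, 3), dtype='float32')   # HWC:高、宽、通道
    img_chw = img_hwc.transpose((2, 0, 1))               # 调整为CHW
    print(img_hwc.shape, '=>', img_chw.shape)            # (160, 160, 3) => (3, 160, 160)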
.. code:: ipython3
import random
from paddle.io import Dataset
from paddle.vision.transforms import transforms
class ImgTranspose(object):
"""
图像预处理工具,用于将Mask图像进行升维(160, 160) => (160, 160, 1),
并对图像的维度进行转换从HWC变为CHW
"""
def __init__(self, fmt):
self.format = fmt
def __call__(self, img):
if len(img.shape) == 2:
img = np.expand_dims(img, axis=2)
return img.transpose(self.format)
class PetDataset(Dataset):
"""
数据集定义
"""
def __init__(self, image_path, label_path, mode='train'):
"""
构造函数
"""
self.image_size = (160, 160)
self.image_path = image_path
self.label_path = label_path
self.mode = mode.lower()
self.eval_image_num = 1000
assert self.mode in ['train', 'test'], \
"mode should be 'train' or 'test', but got {}".format(self.mode)
self._parse_dataset()
self.transforms = transforms.Compose([
ImgTranspose((2, 0, 1))
])
def _sort_images(self, image_dir, image_type):
"""
对文件夹内的图像进行按照文件名排序
"""
files = []
for image_name in os.listdir(image_dir):
if image_name.endswith('.{}'.format(image_type)) \
and not image_name.startswith('.'):
files.append(os.path.join(image_dir, image_name))
return sorted(files)
def _parse_dataset(self):
"""
由于所有文件都是散落在文件夹中,在训练时我们需要使用的是数据集和标签对应的数据关系,
所以我们第一步是对原始的数据集进行整理,得到数据集和标签两个数组,分别一一对应。
这样可以在使用的时候能够很方便的找到原始数据和标签的对应关系,否则对于原有的文件夹图片数据无法直接应用。
在这里是用了一个非常简单的方法,按照文件名称进行排序。
因为刚好数据和标签的文件名是按照这个逻辑制作的,名字都一样,只有扩展名不一样。
"""
temp_train_images = self._sort_images(self.image_path, 'jpg')
temp_label_images = self._sort_images(self.label_path, 'png')
random.Random(1337).shuffle(temp_train_images)
random.Random(1337).shuffle(temp_label_images)
if self.mode == 'train':
self.train_images = temp_train_images[:-self.eval_image_num]
self.label_images = temp_label_images[:-self.eval_image_num]
else:
self.train_images = temp_train_images[-self.eval_image_num:]
self.label_images = temp_label_images[-self.eval_image_num:]
def _load_img(self, path, color_mode='rgb'):
"""
统一的图像处理接口封装,用于规整图像大小和通道
"""
with open(path, 'rb') as f:
img = PilImage.open(io.BytesIO(f.read()))
if color_mode == 'grayscale':
# if image is not already an 8-bit, 16-bit or 32-bit grayscale image
# convert it to an 8-bit grayscale image.
if img.mode not in ('L', 'I;16', 'I'):
img = img.convert('L')
elif color_mode == 'rgba':
if img.mode != 'RGBA':
img = img.convert('RGBA')
elif color_mode == 'rgb':
if img.mode != 'RGB':
img = img.convert('RGB')
else:
raise ValueError('color_mode must be "grayscale", "rgb", or "rgba"')
if self.image_size is not None:
if img.size != self.image_size:
img = img.resize(self.image_size, PilImage.NEAREST)
return img
def __getitem__(self, idx):
"""
返回 image, label
"""
# 花了比较多的时间在数据处理这里,需要处理成模型能适配的格式,踩了一些坑(比如有不是RGB格式的)
# 有图片会出现通道数和期望不符的情况,需要进行相关考虑
# 加载原始图像
train_image = self._load_img(self.train_images[idx])
x = np.array(train_image, dtype='float32')
# 对图像进行预处理,统一大小,转换维度格式(HWC => CHW)
x = self.transforms(x)
# 加载Label图像
label_image = self._load_img(self.label_images[idx], color_mode="grayscale")
y = np.array(label_image, dtype='uint8')
# 图像预处理
# Label图像是二维的数组(size, size),升维到(size, size, 1)后才能用于最后loss计算
y = self.transforms(y)
# 返回img, label,转换为需要的格式
return x, y.astype('int64')
def __len__(self):
"""
返回数据集总数
"""
return len(self.train_images)
3.4 PetDataSet数据集抽样展示
~~~~~~~~~~~~~~~~~~~~~~~~~~~~
实现好Dataset数据集后,我们来测试一下数据集是否符合预期。因为Dataset是一个可以被迭代的Class,我们通过for循环从里面读取数据,再用matplotlib进行展示。这里要注意的是,分割的标签文件是1通道的灰度图片,使用imshow接口展示时需要传参cmap='gray'。
.. code:: ipython3
# 训练数据集
train_dataset = PetDataset(train_images_path, label_images_path, mode='train')
# 验证数据集
val_dataset = PetDataset(train_images_path, label_images_path, mode='test')
# 抽样一个数据
image, label = train_dataset[0]
# 进行图片的展示
plt.figure()
plt.subplot(1,2,1),
plt.title('Train Image')
plt.imshow(image.transpose((1, 2, 0)).astype('uint8'))
plt.axis('off')
plt.subplot(1,2,2),
plt.title('Label')
plt.imshow(np.squeeze(label, axis=0).astype('uint8'), cmap='gray')
plt.axis('off')
plt.show()
.. image:: pets_image_segmentation_U_Net_like_files/pets_image_segmentation_U_Net_like_12_0.svg
4.模型组网
----------
U-Net是一个U型网络结构,可以看做两个大的阶段,图像先经过Encoder编码器进行下采样得到高级语义特征图,再经过Decoder解码器上采样将特征图恢复到原图片的分辨率。
4.1 定义SeparableConv2d接口
~~~~~~~~~~~~~~~~~~~~~~~~~~~
为了减少卷积操作中的训练参数量、提升性能,我们继承paddle.nn.Layer自定义了一个SeparableConv2d Layer类。整个过程是把\ ``filter_size * filter_size * num_filters``\ 的Conv2d操作拆解为两个子Conv2d:先对输入数据的每个通道使用\ ``filter_size * filter_size * 1``\ 的卷积核逐通道计算(输入输出通道数目相同),之后再使用\ ``1 * 1 * num_filters``\ 的卷积核计算。
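可以粗略估算一下这种拆解带来的参数量差异。以3x3卷积、输入128通道、输出256通道为例(忽略偏置项,下面只是笔算示意):

.. code:: ipython3

    k, c_in, c_out = 3, 128, 256
    standard  = k * k * c_in * c_out        # 普通Conv2d:294912个权重
    depthwise = k * k * c_in                # 逐通道卷积:1152个权重
    pointwise = 1 * 1 * c_in * c_out        # 1x1卷积:32768个权重
    print(standard, depthwise + pointwise)  # 294912 vs 33920

后面4.5节summary输出中,SeparableConv2d-11对应的两个子卷积参数量分别是1,152和33,024(后者含256个偏置),与这里的估算是一致的。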
.. code:: ipython3
class SeparableConv2d(paddle.nn.Layer):
def __init__(self,
in_channels,
out_channels,
kernel_size,
stride=1,
padding=0,
dilation=1,
groups=None,
weight_attr=None,
bias_attr=None,
data_format="NCHW"):
super(SeparableConv2d, self).__init__()
# 第一次卷积操作没有偏置参数
self.conv_1 = paddle.nn.Conv2d(in_channels,
in_channels,
kernel_size,
stride=stride,
padding=padding,
dilation=dilation,
groups=in_channels,
weight_attr=weight_attr,
bias_attr=False,
data_format=data_format)
self.pointwise = paddle.nn.Conv2d(in_channels,
out_channels,
1,
stride=1,
padding=0,
dilation=1,
groups=1,
weight_attr=weight_attr,
data_format=data_format)
def forward(self, inputs):
y = self.conv_1(inputs)
y = self.pointwise(y)
return y
4.2 定义Encoder编码器
~~~~~~~~~~~~~~~~~~~~~
我们将网络结构中Encoder的下采样过程封装成一个Layer,方便后续调用、减少重复代码。下采样对应U型结构中逐渐“向下”的那一半:不断重复同一个单元结构,通道数不断增加、特征图形状不断缩小,同时引入残差连接。我们把这些操作抽象出来统一封装,定义完成后,下文还给出了一个用随机输入检查输出形状的小示例。
.. code:: ipython3
class Encoder(paddle.nn.Layer):
def __init__(self, in_channels, out_channels):
super(Encoder, self).__init__()
self.relu = paddle.nn.ReLU()
self.separable_conv_01 = SeparableConv2d(in_channels,
out_channels,
kernel_size=3,
padding='same')
self.bn = paddle.nn.BatchNorm2d(out_channels)
self.separable_conv_02 = SeparableConv2d(out_channels,
out_channels,
kernel_size=3,
padding='same')
self.pool = paddle.nn.MaxPool2d(kernel_size=3, stride=2, padding=1)
self.residual_conv = paddle.nn.Conv2d(in_channels,
out_channels,
kernel_size=1,
stride=2,
padding='same')
def forward(self, inputs):
previous_block_activation = inputs
y = self.relu(inputs)
y = self.separable_conv_01(y)
y = self.bn(y)
y = self.relu(y)
y = self.separable_conv_02(y)
y = self.bn(y)
y = self.pool(y)
residual = self.residual_conv(previous_block_activation)
y = paddle.add(y, residual)
return y
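定义好Encoder后,可以用一个随机输入快速自查输出形状是否符合预期。下面是一个简单示意(假设上面的类定义已执行;这里用 ``paddle.to_tensor`` 构造输入,若所用的2.0-beta版本没有该接口,可换成等价的张量构造方式):

.. code:: ipython3

    import numpy as np

    x = paddle.to_tensor(np.random.rand(1, 32, 80, 80).astype('float32'))
    encoder = Encoder(in_channels=32, out_channels=64)
    y = encoder(x)
    print(y.shape)   # 预期为 [1, 64, 40, 40]:通道翻倍,尺寸减半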
4.3 定义Decoder解码器
~~~~~~~~~~~~~~~~~~~~~
在通道数达到最大、得到高级语义特征图后,网络开始进行decode操作:逐步上采样,通道数逐渐减小,特征图尺寸逐步增大,直至恢复到原图像大小。这个过程同样通过不断重复相同结构的残差单元完成,为了减少重复代码,我们也把它定义成一个Layer放到模型组网中使用。
.. code:: ipython3
class Decoder(paddle.nn.Layer):
def __init__(self, in_channels, out_channels):
super(Decoder, self).__init__()
self.relu = paddle.nn.ReLU()
self.conv_transpose_01 = paddle.nn.ConvTranspose2d(in_channels,
out_channels,
kernel_size=3,
padding='same')
self.conv_transpose_02 = paddle.nn.ConvTranspose2d(out_channels,
out_channels,
kernel_size=3,
padding='same')
self.bn = paddle.nn.BatchNorm2d(out_channels)
self.upsample = paddle.nn.UpSample(scale_factor=2.0)
self.residual_conv = paddle.nn.Conv2d(in_channels,
out_channels,
kernel_size=1,
padding='same')
def forward(self, inputs):
previous_block_activation = inputs
y = self.relu(inputs)
y = self.conv_transpose_01(y)
y = self.bn(y)
y = self.relu(y)
y = self.conv_transpose_02(y)
y = self.bn(y)
y = self.upsample(y)
residual = self.upsample(previous_block_activation)
residual = self.residual_conv(residual)
y = paddle.add(y, residual)
return y
4.4 训练模型组网
~~~~~~~~~~~~~~~~
按照U型网络结构格式进行整体的网络结构搭建,三次下采样,四次上采样。
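在看代码之前,可以先用简单的整数运算推演一下特征图尺寸和通道数的变化过程(以160x160输入为例,只是笔算示意,后面4.5节的summary输出可以用来对照验证):

.. code:: ipython3

    size, channels = 160, 3
    size, channels = size // 2, 32            # 入口Conv2d(stride=2):80x80,32通道
    for c in [64, 128, 256]:                  # 三次Encoder下采样
        size, channels = size // 2, c
    print(size, channels)                     # 10 256,对应summary中Encoder-6的输出
    for c in [256, 128, 64, 32]:              # 四次Decoder上采样
        size, channels = size * 2, c
    print(size, channels)                     # 160 32,再经输出卷积得到num_classes个通道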
.. code:: ipython3
class PetModel(paddle.nn.Layer):
def __init__(self, num_classes):
super(PetModel, self).__init__()
self.conv_1 = paddle.nn.Conv2d(3, 32,
kernel_size=3,
stride=2,
padding='same')
self.bn = paddle.nn.BatchNorm2d(32)
self.relu = paddle.nn.ReLU()
in_channels = 32
self.encoders = []
self.encoder_list = [64, 128, 256]
self.decoder_list = [256, 128, 64, 32]
# 根据下采样个数和配置循环定义子Layer,避免重复写一样的程序
for out_channels in self.encoder_list:
block = self.add_sublayer('encoder_{}'.format(out_channels),
Encoder(in_channels, out_channels))
self.encoders.append(block)
in_channels = out_channels
self.decoders = []
# 根据上采样个数和配置循环定义子Layer,避免重复写一样的程序
for out_channels in self.decoder_list:
block = self.add_sublayer('decoder_{}'.format(out_channels),
Decoder(in_channels, out_channels))
self.decoders.append(block)
in_channels = out_channels
self.output_conv = paddle.nn.Conv2d(in_channels,
num_classes,
kernel_size=3,
padding='same')
def forward(self, inputs):
y = self.conv_1(inputs)
y = self.bn(y)
y = self.relu(y)
for encoder in self.encoders:
y = encoder(y)
for decoder in self.decoders:
y = decoder(y)
y = self.output_conv(y)
return y
4.5 模型可视化
~~~~~~~~~~~~~~
调用飞桨提供的summary接口对组建好的模型进行可视化,方便进行模型结构和参数信息的查看和确认。
.. code:: ipython3
from paddle.static import InputSpec
paddle.disable_static()
num_classes = 4
model = paddle.Model(PetModel(num_classes))
model.summary((3, 160, 160))
.. parsed-literal::
--------------------------------------------------------------------------------
Layer (type) Input Shape Output Shape Param #
================================================================================
Conv2d-22 [-1, 3, 160, 160] [-1, 32, 80, 80] 896
BatchNorm2d-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 64
ReLU-9 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ReLU-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
Conv2d-33 [-1, 128, 20, 20] [-1, 128, 20, 20] 1,152
Conv2d-34 [-1, 128, 20, 20] [-1, 256, 20, 20] 33,024
SeparableConv2d-11 [-1, 128, 20, 20] [-1, 256, 20, 20] 0
BatchNorm2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 512
Conv2d-35 [-1, 256, 20, 20] [-1, 256, 20, 20] 2,304
Conv2d-36 [-1, 256, 20, 20] [-1, 256, 20, 20] 65,792
SeparableConv2d-12 [-1, 256, 20, 20] [-1, 256, 20, 20] 0
MaxPool2d-6 [-1, 256, 20, 20] [-1, 256, 10, 10] 0
Conv2d-37 [-1, 128, 20, 20] [-1, 256, 10, 10] 33,024
Encoder-6 [-1, 128, 20, 20] [-1, 256, 10, 10] 0
ReLU-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 0
ConvTranspose2d-15 [-1, 64, 80, 80] [-1, 32, 80, 80] 18,464
BatchNorm2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 64
ConvTranspose2d-16 [-1, 32, 80, 80] [-1, 32, 80, 80] 9,248
UpSample-8 [-1, 64, 80, 80] [-1, 64, 160, 160] 0
Conv2d-41 [-1, 64, 160, 160] [-1, 32, 160, 160] 2,080
Decoder-8 [-1, 64, 80, 80] [-1, 32, 160, 160] 0
Conv2d-42 [-1, 32, 160, 160] [-1, 4, 160, 160] 1,156
================================================================================
Total params: 167,780
Trainable params: 167,780
Non-trainable params: 0
--------------------------------------------------------------------------------
Input size (MB): 0.29
Forward/backward pass size (MB): 43.16
Params size (MB): 0.64
Estimated Total Size (MB): 44.10
--------------------------------------------------------------------------------
.. parsed-literal::
{'total_params': 167780, 'trainable_params': 167780}
5.模型训练
----------
5.1 配置信息
~~~~~~~~~~~~
定义训练BATCH_SIZE、训练轮次和计算设备等信息。
.. code:: ipython3
BATCH_SIZE = 32
EPOCHS = 15
device = paddle.set_device('gpu')
paddle.disable_static(device)
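顺便可以估算一下每个epoch大约要迭代多少个batch(训练集样本数为7390张原图减去1000张验证图片,下面只是粗略的笔算示意):

.. code:: ipython3

    print((7390 - 1000) // BATCH_SIZE)   # 约199个batch每个epoch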
5.2 自定义Loss
~~~~~~~~~~~~~~
在这个任务中我们使用SoftmaxWithCrossEntropy损失函数来计算loss。飞桨中已有functional形式的API,这里我们做一个自定义封装,实现一个Class形式的API放到模型训练中使用。没有直接使用CrossEntropyLoss的原因,主要是对计算维度有自定义需求:本次需要进行softmax计算的维度是1,而不是默认的最后一维,所以我们采用上面提到的损失函数,并通过axis参数来指定softmax的计算维度。
.. code:: ipython3
class SoftmaxWithCrossEntropy(paddle.nn.Layer):
def __init__(self):
super(SoftmaxWithCrossEntropy, self).__init__()
def forward(self, input, label):
loss = F.softmax_with_cross_entropy(input,
label,
return_softmax=False,
axis=1)
return paddle.mean(loss)
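下面用随机数据示意一下这个Loss期望的输入形状:logits为NCHW(类别维在axis=1),label在类别维上为1。示意代码假设 ``paddle.to_tensor`` 在所用版本中可用,数据为随机构造:

.. code:: ipython3

    import numpy as np

    logits = paddle.to_tensor(np.random.rand(2, 4, 160, 160).astype('float32'))          # N, C, H, W
    label = paddle.to_tensor(np.random.randint(0, 4, (2, 1, 160, 160)).astype('int64'))  # 类别维为1
    loss_fn = SoftmaxWithCrossEntropy()
    print(loss_fn(logits, label).numpy())   # 输出一个标量平均loss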
5.3 启动模型训练
~~~~~~~~~~~~~~~~
用模型代码生成Model实例,使用prepare接口配置优化器、损失函数和评价指标等信息,供后续训练使用。在所有初步配置完成后,调用fit接口启动训练过程,调用fit时只需要把前面定义好的训练数据集、验证数据集、训练轮次(Epoch)和批次大小(batch_size)配置好即可。
.. code:: ipython3
    # 先创建Model实例(注意PetModel只接收num_classes一个参数),再用它的parameters()构造优化器
    model = paddle.Model(PetModel(num_classes))
    optim = paddle.optimizer.RMSProp(learning_rate=0.001,
                                     rho=0.9,
                                     momentum=0.0,
                                     epsilon=1e-07,
                                     centered=False,
                                     parameters=model.parameters())
    model.prepare(optim,
                  SoftmaxWithCrossEntropy())
    model.fit(train_dataset,
              val_dataset,
              epochs=EPOCHS,
              batch_size=BATCH_SIZE)
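训练完成后,通常还会把模型参数保存下来,便于后续推理或继续训练。下面是一个简单示意(保存路径仅为假设的示例,save接口生成的文件内容以所用版本文档为准):

.. code:: ipython3

    model.save('output/pet_unet')   # 在output目录下保存模型参数(路径为示例)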
6.模型预测
----------
6.1 预测数据集准备和预测
~~~~~~~~~~~~~~~~~~~~~~~~
继续使用PetDataset来实例化待预测使用的数据集。这里为了方便,我们没有再另外准备预测数据,而是复用了评估数据。
我们可以直接使用model.predict接口来对数据集进行预测操作,只需要将预测数据集传递到接口内即可。
.. code:: ipython3
predict_results = model.predict(val_dataset)
6.2 预测结果可视化
~~~~~~~~~~~~~~~~~~
从我们的预测数据集中抽3个动物来看看预测的效果,展示一下原图、标签图和预测结果。
.. code:: ipython3
plt.figure(figsize=(10, 10))
i = 0
mask_idx = 0
for data in val_dataset:
if i > 8:
break
plt.subplot(3, 3, i + 1)
plt.imshow(data[0].transpose((1, 2, 0)).astype('uint8'))
plt.title('Input Image')
plt.axis("off")
plt.subplot(3, 3, i + 2)
plt.imshow(np.squeeze(data[1], axis=0).astype('uint8'), cmap='gray')
plt.title('Label')
plt.axis("off")
data = predict_results[0][mask_idx][0].transpose((1, 2, 0))
mask = np.argmax(data, axis=-1)
mask = np.expand_dims(mask, axis=-1)
plt.subplot(3, 3, i + 3)
plt.imshow(np.squeeze(mask, axis=2).astype('uint8'), cmap='gray')
plt.title('Predict')
plt.axis("off")
i += 3
mask_idx += 1
plt.show()
wKotG4iIRR6XxF2G7gqjMZ7A8gRUloQgpa0eY/OBy170yip5ZvlrAxW7cnRCN4wDHDBwbluBUBn7fGR84LgAsDSU4J8EAOahfuVIXhj3wfc3NwjhB7zkopgAdAeHnLOlGJxyE0Urs14ealGZ+s7BxTtX/AvfOj0ryNPM1VXste+mpxtyW9Y7jE4r30asjAJmpteRQwa6dnKtldMCR0BUfEUdOKPUM5SgQwUjr8Uh4ipDZ1OICplyJoOExEG7VMh6aUB3TCUwtwKCGdYHaIIk5igGOOqpN6unXNG14tJY4ZGR7rokhAVyResB6WdZ5b+r6QNWCywIB8kfeVxU0SAAAAgAElEQVQlgux7YXNI9yFGigk5RYCCCBJLWiqlKM+SJUKTajRpZfXs2TPs9zfYbnfSmfsxIqUFm+0epxPAfMH5fAFlBgoiYEU70ABgbQZl/iud2twlA1aJDH+rQRDQpo/kxDW/e+WXFj6bNKwJLbc9G15CFu7nYt5aU2OrpLR2aoLa4KVwwySfAaFXp4QMIfoFLdAwjRCCFmGoiTNYwXAxsPDRfAjohwH9OKLvOvRDb0OnAmU+mWonApgTKFTBK/WVgjWA2WnXHmFzJmOOeAdmi6DMB2FF7F0xwdYnAiCxAKTWAFDhFuCYIIXJGlrJ9SQUh0AZDvBN2kcX3jAMONzc4HA4YIkXUMyY5iPiMmFaJtzutzidTghkRSUS3cN/zL9q3gNDu+qs3q8KQ90Fg+Gppfh86DK1EW6rkJgBVzpPO+n4xwC60MPDSejb+CA1oUyrNM0QahUQEWEx05kyXJIrFT+oQasBRoyzVPvoyokxFTOZc8bxeGxQbmCzkV4X2bA0R5hjlIGAQ4wL5lnap7eAq73a3miWAkuJpd9HAVolUOBkxTR19TJntKwFOYZLKsgGdxxHWGJbzLIKOIQCTgzkmMr1gFQiOSvqdeSECTxI77UYI6b5IkDxEvHZp5/i5uZGLEWJglvYqUW9muDLBM6t368vIzbac1/7dx/ayetz2Lgxs6SVWnLhtttIZj9luK4H0rLSWtcnLf6Zc0iwgUQ5JqeMxvyvXjFGDQCEFx85FuGw1qCDMi+89xg2G/gulIkzJqskkC26q8W0FfOqlCUocdK7ToQIQN/JphNgCwgYRAnkCch1wMqCW8EPNtjVJ+tChxRlIiwh7r2Hg91XbZDHnCCQJMFDesp6DkhRMiFjP4IQ8Fc/+6lE/46kMgqM49MJn336Qu6Jcw2QGnzs+kWAMjfWc7l+FhYLQJLua5UCAF1UH5lQGwlXmdEA4Nq26d577Kwt5tCLyiVhJJgW+9iPCal3Hh15OH3AnDMihAlrNyrXE4fZePfWA2yeLiVtZGtoWRYcj0eBNLoRm3G3EvIYI+bLpBhaKHSgFmiUgTHmgGorVsA5M+Z0KQyPZZlF63DCnGZkKO0pA1k13vViu6Yot8GFvW9RfNWqBq1oloClOV7Ki1RIkQMyo+t6PHvxAt//lb+FcdyBXNAF73A8X4Rp3AQ7BNtgQjZ3cFwxOACAd9IpGwDFDMq8us/2uaqWrXiZEDVbAqfAKG3A0JZZqtWrxQ3b7RYANOrTwlo4OKoA5rWUrl5ZuioXyIIAj8pHYgDeB93EQTIMcm2Z4JSl0UlSMxVCKNz+0+mE7f5Og4UAIJe2BLZil2UB0do/SCkVmCNl2Sug5eQDWCH2IjAijJaqccGjruJcFheg/P9QeWnMkk3IzAiharFpPorpKL1ixR8sfi1HOHYgHa+olU2h67U96TO8efMKp4cHxOUM52UxPz4+SgRvwlCg/pXIfLAAPKgENSsw1pHUG5Cl44C271k7BhKkYaUyK55Zd54J4ziu0i19P8BrysW5gOlyBsd5BWFY+qGtzTRTstqEINdWBlA/bhgGpcJUG55zRlwWZNUmhnNdsyfGcdRJovIdZmn3VAtRPiz1KvsLdHUFZmS4rPUJ0e5DBDIpxsMkGiVFE6y1Zqx7SGV4X8fhcrmgHwcscZIoXIOYTgtm6mQI+8N7D2L9UbMcgvLfgsd2s8f5dMG42eDlZ5/hm9eEx8f3uL+/w8O7N9LLbJ4+6DQONFqsrR0xYcwojOfy/UZTOzIsTAS3eqb6/fJlWmnLa5wutCsSAJwP2Iw7pJSw3QJPj+8xnY/IKa6iMxtoO6nhUtXRls0VpCUoF6wpJYb3WUiBDTfKJivnjGVaSqsEKR7usNvtVqaHcwVp2THGvgcjS2Bb2h7VxSM+lzShs6gXLoCRtQCiLhwBRxN6Ly2QMiVpLqx+VNtMpqVyt4vNFodBKa2w25jJ942NIbhaPbZCBKHr8Gu//uv4V//6LV69+iukRZrHPLu7xXR6wGYz4lHfa5+ZFEbSwHgdBNgcsmQeiOrnZBCF2HFJu7E6+xZum3nEt2nPKnCBHKHzlr902G52GDd9YxaAJ0dYLhfVbgnmS7RMjVbzyPcAcoxpyUIq1BL8EskayswVqmAGliUiLVH78UvkuT/cYrs7YKfm3NpGtry2lDJCkPSUDiOct4S2OLIxRd0JRTWwk7YG0hrK10FLSQqNUwK7APisQroWHhMu08rmYnRdhxzFR0SWjcNaQRTQOKl/Kh28ycn2M0iW0ZAUX+c8druxKIG7u1s8Pci1Ht69A1JE5we0O8C1RFJmlqZ1Sgtv9nsQ08hcqowKlZvEl+OcC4hehI9VaInhraUVJJfdClqrzQJzQoYVnoofsSy6Y0c/Yhw28I4wjyMulxNisytb65fVlUnaCC5imk8Az0gEJEh/L+9qIartRtY68eIWifk7nU6lM7ZsXzMU8p5MqgYkRBrtEFJUv4ehCeqEvh+gjb4Q0wznJckeXMC8XEoaCyQ5VRiUAOshbYvomqgJTNMF3gektKDvO4TQS2TmCIyo2r1mF0wDViFQ2jlkYy2ZxARiWYSk4/Xd7/4y/ta/9zX+dI6YzicQebz+6itsNzKGornjChivWhwgXtMVi5/MBsvIrnCZ7D0ICYBblVUamYJASHpGzYl8K/gfzuezSr+asyhFFPO8YBxH7T8RELoOA8ZyIkPRW2qOMSRsNzlQ1vZMTtp1s3Sfdj7BkTAScqMZsvYIiznJNjgxoR8H6dOv3LFe97Q8n8+a+5Q7Ej79OrqRJJWa86baygTl6ekJ/WCZhmaQyAGaMqpmuwY8a9gnA5RFE+UIRFk8/RCwLAkhdCUrUFNCdavFlCKIpRCGc0Z2snMJUVQqum69OG6w291oasnhcNji+PgeS5wUZ6xtutrImvS80AVzXT8LmI8lm4sV38sE1OhORYnUZtUmfeZzm9C2CgcAwuVyKkJiu3ewgrCX8xNCCNiMW0321obBVtJmKQgzlzlHhKBRGTn0XYclLmB28oQ5IUVp71RbK4k5y2kRDMbMoPO4u3sm2tRL0CD05Q7jKBiTI/H5pDq8ppfKas2VnSr37AA4bLcb3QjMwZO1UReN4IJHipX9Kg17SYthnDQCdur4luu5Mm6iyefC9QdM8xmo3RWBEDRC+4q43Ai39PgyQen7EZ++/BT7ww0up/f43vd+CT/76b9DzFGC
JkB3PSl5CVCuPXizhoptUx0TNmHJAJ4ZFBnW1I2Za+ag+M4G8daggCFjQFeCVjTZdJFwGMyIYEzTRVUlCY3ZB1ymC4Z+wNAPyHGpoB8q+9Q5QuYoiPsyqXZKuqlolDIyJ2Bp3f4vrHwHyblFabgXZQIPhxvBxqhqJfH9rKSNwDlKV0KysjHbsCo3k1lbLxAcTqcTnJMMQxdqMDEMI5Y5fuDflIwFZzj4VRTNbLWXA0CKfWXVjU3Ng/lstiBli8QFxNrWqsyfOOIlgmXSApUBn3/+Ob7+6heYpsmc6hWyWmNKFRZeN0zJ1xFA0UcqHA2A3YpU8cXKe/VKlqUs52o0KTMjoKHvTMsMkDT2JSJk55F8B7csSDEK1147DdqE+SCDQ0RIOaLuvcMAazoIDKG3aC9VzrB9jlrzlWwFUhvFeTBVUzyOoxIOnSbXqURRBhhaAGKC1S6IlBKGYcASVSPr1i5934MBTJe5wgBY06OJSDtBWsoFxZckqpXodr5rLV+yI7421pPKJQdnW/MAyEmi8gKLaIbh/v4eLz/7HON2gy+++EK1kBTSiIBQ8ZFYgdiVIJFkMFrnqY3ufaPlWp+toP0F/jBdpoAyCIVCbMJKNZ0XrF3jOk2UAfWXwOYMJiSnBa+oobrPhj1B+z20N56QGIBGZlT4VA6cE1KMq8Z0tiJtqzzvBa/b7gTlt0Ajxhned5jnBd4DwQuTwfj45iuaYLW+lHMO0zTBaMuuk023GNA2CKl0+yayaLVJKZVd3arPY5kMu7++HyVIcITgrCyP1Xdbm3Q5vrYKsOS2nbdqbad7mTu8/PRz/PUXfwXZfxPwoattvWxhwKJBU0HVi4JmAdqmdq3T3qbMiOp8Fm1HpJ0b6/ull4UcuDLHjo1guCryFCKF7Hu4CEYWI+I8IcZFdvTVIlvZz3vGskwlNWOrt7RmN1dRL+ydVM8wJ3BOdWAUf6FmNFLOZTNV4Yd5jdgUpNXJysmKV7JCEjXSbYMT+1sAVOHhCyvX43w+q28hzq7swlE3tW81r3O+0H8cuaK9xcxFgTz02UsnIWe9Z2XhWbAl8IJsvmqdh6wARVI2spUiEWPoevzNv/mrWhUvfLXtdlu0kKhC8UWrECheRkItN+JDGWUd9zL/aMxkI5jt70LlVpmSyAFtzq21UAFYh7Orahz7jIHEtbe9jk5ZIUSGYuPDF6lWKCZGhJhJd0lrrm9gn++8Mm1lRY7jiGmaCsV6nmdsxhFSl7ku/Rfz3WJ4rgCzovqrX1QAS5CaWRMmE3ihhJv/VD8XDpz3TX8ymIYj5MRYENEFhxCqqS2usy1kzpLFSYuUAhY2Ra3ot/MzCz54e3uHn/3sZ5paE790v9+VDS5Mi/EHk9GUB5LBDlV+rlt12icWzJDha/SxL3Ld1BVrrQjUli4rAWnD0HIbTd9Rwa4YBvTajZOepzqOTgXw+rKyuZP0YZBaz5oPVCDUubKxvWlH2TQiYplnxLgICZBqBsLMkF2t9uyqC+dymYqGk4BAMhzLEgtVxzAxhiHvGUQBts9QSwywJV7frxkA0t6vy7IUYoBouooxRo6A43KsjX8brLS+3H6/xzgOZdyd1gM4cmVj+laL2PEFoVcznHKSvRmIyp7nzQElfSTAq/2vasdW2DJaIVVvrRGoUH+vN9hqtnYFrvCR9jjdHuWDtUOMUoxJ9QZUpwMcdfUFeKdVQMSIKaJ30s8CJI3vjC3y9PSE0HW4TBfxg6KE/hbh2f6b4FSyCpamGYYetTW6V0G3AhJLC2n1FEMCHA8Q+cbngQYlQSEd4bI5J71zQ9chOGHIsmZGxJci2bRCAUzOQrHOLEl7uyfRklyAVUb6cI2C4UOPYdzifDri7ds3YqLVFNpIswUA1jpC0cCiwTVsNJuRqSl709RAmbYipfK/GmHSSsEQc/G9s45VCQmupb88ThMMyCQ2UYaqfettbyAemn9FXqtTaslW+ztn8e88EYLiUM5LEYZttpVSwu3tLYgI0zRjf9hhibOW1AU4bz33A4LvwMoBM9qQ0Wn6fijPYzBMzhkGSK+eialoHKsSFxehrcZicNO23LQYKYU9eK/PImVxhh1KZlqO8Yr7eW9FJ5bH1HSV+soxRszzXAKOm9sbJSEwvvzyy6JhWE2hmEDSzEMbEaI2DgbUdzNZUStFVMilZn6LBmsUmZlZhiEObKZMtKFG/C7rAF4zKtqX/E2NILY6i8pF7fiPvVq/ixqhK5MFaejiQOAkNYt1o3WUpnc5Zwz9AO877LbSkGXcjGVLaNso3mg6tRVmjU4tV2rpsWVZZAvp5j7b7+bGoWXO2r7AgE47RnKAJedLrAUedcH1/QgBbT1CGOBDj9AN8G6Ac31x+E2zWhBQIIXMmKdJun6HDjkzXr58qQFQNYVyjkYiIIwK23ALzT21eU6DP3LOqz1N2zm0+4AplSZQK+aVa9gDwPZWqvSTjwmY3YDZ2mttJSGr4GD1b15/B6IdvJMw3ZxZkdkM5z26LiAuCeBYose4LOBhwNdff43vfOc7uFwumKYFn3/2HczzInABSbWTRZWAcOL6YUTX98gsTNh1Ul8E6ebmBk9PT8XnucbWzNSymmIp/tViYSfEQu8ZzndgOPTdAIKki5ZlAYPRdwO6vlPKD+D8IEIprSEBCCXboZJHq+kkkAvI2fzRRavfF8xzxHYrMAsvlRlry5hLYEZF6yBX/+t6rhlqUhlIxCpA60o0ZKUo6fMXlWOajVHwOtJgxNnFWom29z5mPlWK1p+35uNbvm8ApjmrAAozALBGI4QQdCsZPd/lcsHxeCytN6Py+bdb2YnksD9ontR46TZgKLv5Bj/A6h8rE0K+dT6fcblcStPkNulv38uNybS0VHUDCEwOcE7SRfBq2gSLm6YTYlxgEAFRRgiA8wxh587IeQawwBHKtj81RcaImRGz5FoFevG4e3aPzWaLV69fiSA5J1CGZkE8ETo46XZpvhih1DWguEGNL55Z9kVqp0w/K64S6nsy31xKIk2bAqrdliTM2/WKzR8VLBKdX4TRHNqVaeS2dOwqslGPwGr7Sq2nqtxVcUqnvokeG0LAsJFGKl999RUOBynoPZ3OIBIhZGYE1YTXjX+NuWpcMbvneV5wPl904lLx4Voz+m2mvy1IFsDVwyGDc0TOETFOmJcLpumsEfGCZZlwmZ4AygJELwvm5VLglJxZkt3xDCCVe0pJUnPESavUGX3fYb/b4z/8D34DwQekmFamrQRrGoFmQV81KOK6+0nzKjw31VjJsE6qi83wCfPzbH4dCJ6k3b4pjQRGdkB2qF19PjagK5PXaqnV/Zkl9gDSt+AoVfJlmzqpj7SIiiBgZj8MipZ7Kdl3hBhnZO1Wk3PG4XDA09MJIUx4/vwZHh4eMI4jzqcn2CaqhqdlizhLrCH3Zo30cmYpJ9Oqq8vlolVGlQhgfmAbKFgNp3dOMU81USSCEuMCzhGcGJwSljihyx7e9Zh0DwE4Qt+NcBCWcEoLUpoETOYMRwuksYzc/+V0whJnnM5HpBRxPs74V3/0f4GgjNuYChaac4YDVZI
kObBmKqwdVpnXq1cp/iASv4tR5tRaK3xwVNacrnNgMmGtghjE91nn1q6FTrAnjVhY9/5hi2YFjuAPL90EDEDtNQrZPY09NpsD7m5vMQwjLsuM9w/vkXLGJ59+hvfvHwAi9H0nHDTncHNzg2laFONKmCahN1cHPup4qFb1knjuhx5x0d4RKjjTtKy0twi8kAitiupau9v3AGUSN+bB0lYxJ8QpYro8wrsgKaJiKgnDsFUWBiDaXarGyRPAVKPjnHFJp6Lp3j+8xTevv8ai5IObm3t0nUecGJ4cnr98hq9ev8ayLNhtd3h+/wyAaPplMQ2tOx53EkzkmAqV6VoTdpYlIZS9Tm3Sc/21QGit9QDpM0MgoVISF2Ozx3gjYEbfgTU2YzXWBrKyheQf98eMi28T8+zZc3Rdj/P5iOPxiL/+6hcAOfgu4Ob2Bp9++jmCk16vRB1ev36N7ei1P6yQ9V6+fIl377/GwwPwySef4NWrV3Ba/SwTJyYzJcG1s5rCzThgmuQ+eq1If/fuXUHXY4x4enrCdjuu2nDKudYN8YBaLOxcQAiidUGyQx05h8vlgv3trQQ8FErjY/NJC5uV9Z4hzV66Qbt/L4SHhwe8ffsW59OxuBaAw+3tvfzOwOV8wU+Pv8Av//IvIeeML//6S3zxxRfFr5NOiFLyRyQt63W160a4lYdmwhKjtEEw2k8bQUIF6xoybTFUKYlRzX93s/1dGUiDN9aiYpGlhdak1dKwCEYPJt8KFBXMZ1kW3N7e4nA4YJ5nvH//Fo+P7zBNZ91BLcMHiWKmacKbr1/jm69f49nz5/js8+/i/vlL3N/f4XyZsN/LHuPLMgutRid6t9thmSccj6eGBSLVT1JjKdHaze0t+i6g70eM44C+78uOJrvdBtvtviD3IdQ+ILZji5lJg0uqj2d+3oQUF5xPZ5yfztjfHjBspPs2uaBwR4UOCkN4BQPVegPnPDabLW72NzI233xTtE6MC96+eY3Hp0f8+g9/CALw87/6KeKylM5MNn/y7xp4MqAUgO6TtabSk0aJDhD6NlWqVdF4RQquXKtWLojg7253vyurqva7al/i9KOGpGyRZ/slwDY8ELUrJ3/+/Dm89zgej0KlVoapIerEHrIbvPllGSDpqvz+3Xv84ou/xs3NLW5v7/D97/8Knp6OePPmG+z3e9ze3uLNmzdwzuFwOODx8QHHp0cAFfdaloiYhNj44sULbDeb0kzFKqeISO/N0HFB3NuyN6kNrUGFJdmtuFiizSymaZ5wOT6hGwcMmw3GYaM9MkYAwDBUvr6ZqJp0D/ChQz+MpY0Xs/bQyBnbzRan0xmXyxkpLXj7zddw5PGnf/ZnWOYJgQhxiSpQNTXYQhBsbo9GhYCg/+VedJ6dr5uieucKRFGAW2ZYjUURgkbI2pe/u93/rqyedfXRStAg2hbFwa0SLF8QA+207nAcBTh9eHhYtZgUkLLRs5pechCcDETIWbRmhvDT3r17gzdvXmO33+P166/xt//2b+L9+/f4+c9/gc8//xt48+YbTNOE73znu+iHAfM8oQsBUSMuRx7b7R53d8/lvDmV5sAh9LIBa4wYhk1heVg0ahvQy9bT1iutmlvr1gh4xOWCGCdMpzOICIfbW4AY47BD1w0AdI8AqjQh6WYk/pnTnYFJiZQWceeU8PjwWFrc397c4Hh8wu6ww7/+l/8nhk2P4/ERvMwy/w30wSzQhoHEAMpWkqapDI5iZu2zC+3ZYQTPCnkY+i/B6VUPOxNd4prHVIXkb9VctgUO1xgZgVbN8T6QVhU676XlwOVyFgpNA2xW5xk1RVFiYvnXey9bBqaInBk5OaVuM77++mv8+7/2a/jxj3+MEHr89m//J9hstrhcztjvDzgcDpqXlCXX9z36YcQ4bjEMG3RhEH9H6zKdCwidFaQEbLdbPD4+qhkL+uNLFkHoQVLXYH31rQRwnhekZcJymZAzsNlssd3foBtGOHLowohhFKyu7wfllAlOxiQNZUiZIilleCddlRat1np4/yBFzpTx9u07AIzj6YSf//T/w9t33wC5MmRSkw4yANY0EKEgTTVKbP1vM5VqvURIFRVQbr+Bx60gty8u5xKBJUfwdzdiLk3STSjag0v+Ch/aXtJeWnKO1ERj1Vdp4QOwJVBbTZjVxNae+CBfsZqc4X2P169f4wc/+AG+971fxqtXXyHGiB/+8IcAgMfHB9zf3yFn0g2+ZIOGvh9U4HoMg2xlaL0tQif5Sds7oBS0KHXIgoO+78E5SstR9c1MCFOKOD09ImuE67seruuw3W4Rl4i+7zAOW5CTfmyAdOsmLUiOyyxlgNY3TAFfy6kejwJZKNcBr776Ep0nMDu8e/s1Hsy/bQK3MuEqYLaHaPs+NRJR57LpIWfaFiTt7plLXrOVj7U2QwkUHItFYlKczAa43MS1c9b8+WEmwAnD0TVOY6MNeaW1Kqq2GhDVbjlG9OOAlBLmWfaChKZ7xrHHi08+wePDA4L3+M53v4dXr1/hJz/5CQ6HPe7v7/H49ASpiL/Fu3dvEEJfiIkAME1zacFgq3EYBgTl3u92O3z11VcAS/9aoxrlnLDMM4a+R6c9yELwWOYF87xgmU+4nM7YHu4Q1JSmVN2GJUWMWjXf9z2YHChYxEagUCnUrG3hjcNPRHh8fIR3DsfTAzwxTqcnzAvhZz/9aWlQcz13IiBcdpHhRqCcqrZrZdJOuL0t0Xku1gyt49/ITdFsmcEkgBexlOiHFgv62AXl/aS+mLYSV0VFJmSwOvGmSqi95aK91vvyWOK6njBimYGuG9EFj2WRSGoct9jtdgCkUPabb77B669f4/nze9DtAT/7+c9we7vH+/fvsN/f4pe/9yu4vb3Fw8MDcmZsNmPpvTHPF+x2O0yXC+YpYxxHbMaxMDFevHiBd9+8LR3AnXNwTEDM6Dah0L5TSjg+PuFyesTx/XskIgw5wvktuqEvnLigHblzivBezKdXDI5DJSfID5cKrr7vcTmfkeMCoozz6YTpeML5fBZe2c0O4zjgzdv3FUpgmYGiW4zsQVQ3sWV12htFUQpcCpuiTJGaylqVtI5R168SPGRB+rNQcOEPu/F3vy1H2fpRBAh8QfWzahahuNm1/2VmtNZB2rnqNdrftUdaFt56TAtSygofiAN7OZ/w+PgOD49vcZkuWJYF3/veL4m/0w04Xy44nU64ubnD3d0dmKW/xzhucDgcAAjKv91uxeHX9u2mgb33iIsQI512CUIWIuRm3OrnImTLdMbTu7c4PR2xORzQDSO22x2cC9jttzAAehgGhK5H1w8qfIqGs6XYjDJuboRM6nQ+43R8BIgQ5wVvv3mL4+lJ22wlvP36FY7HR0B3lGHmZlOPOk9mKhmAbwDkNqMBQBWFzXULrTSTVGxpmTU50jUWSv1izqIBPzCX13nHD/OQKMLVKiytM/rosbXesUGOG4HMmsMEaW8G4kJKDJ1TQU04Hh8RZ+my44LHOIxgZvziF7/A559/B8OwVSIi4auvvsTd3R0Oh5uSjzSn3ftQNNf5fC69a5kZXeiwHU
e8fnzEsESkGMHBYdwMSDlhmhRwvUw4PT3izTffaKRlNO1Om/QBISgDl4FRgwdyTgiQKrgSUFiqS1nnIByfnvDmm6+Rk9RVztOEuMyYLhfkvofXtltOAc6cU3VFqve9mjNSv0o6kiu151vy1UV+PoKdrgROXR1OXHw9+7ZX31JKGr4FumgF68P3U0F4bYOAlS9/da5vy41a5AJyYAj6zSQ0mGVZMAySSzwenzBNE6Z5klrJrgcUUT6fLvji51/g8fEB2+0e4yjFwO/fP0gvjf0ejhzO5wne99hsdri5vUPX94gpYZ6WgoY7J/uhbzcb6f8xTcrrkwwCmBHnC85Pj3j36jWOTycE3b9J+GKyglOccJkuSJnRDaNwx7yHJ9UmQInWBRLZlGBlWRY8PDzgdHzCNF3w+PCA8/mIfuhwe3OAd8oMQU2d2XxbbziTASMxgrnuyKxJchOKVrNRnRxYE+MSbV5/JrgXYJT3bP4fo5X40Dri9nJF33C1Z8oIbWLOqpWMZt3CHrRuwtICj62WI9LmvoqT2Xcla5UxzxP6occyZ7x//4RxHOBzxvl4KoPQdUm1gPhgz5+9EBD4dMTbt2+x2Wyw2W0xXRjYRAMAACAASURBVC4oG7eGgMPhgMtlQlwV8wZsthvE+YA337zG5fSEzThimaUvRF5mnC8nPLx9g3dv3oDh0fUDuq6XHeugfiY5jJstNts9hn5Qs6T4l42Rc3CsewfA/LIFaVmwGUfE7Rbv378vhAKQUpFYC7tZuV0aHBUyQLP8GY2P1rzaiHL9gQpR6Vq8Ljppj9XZhyhmLgwduzJJT1LdLEK/fG0eqWBYH8aGBOl10VKJr7VVG0xYaC6pCl/OVo5TX8HaJdm1pumC8+UJtzfPMQxeOyGmwngNIWAcc9lmhpnx+PiIYeiVuDjgdD5jS2K6LpezbqMTVNBuC8Mh5yTtEzrBzY4PHZ7ev8fQD+jCVFb9Mk94enjA+XTC7fPP4EIPBimo24sPuNlisztgs9kJ6OoJHLk8t9RpxtU4pZRwuVyQkyT/j8cT5nlG13d4fDzhyy+/xKBa03mUBoHWcrNObitEXAMDVN+zgGfG5bcx/wiyYNmD1haywhnm4ki9wjo1VdpfiEpdS3WVRhR6bRHs2nAfNTAkLW9fMxY+uFtYiVUd2MwMjgui9aJXsxUzMC8iTM4TLpczDocbEHV4eHgsLAnZFifCeunnzFiWGXFZcLhx8JsNuhBwOYtwySCydgZCCQxiipLO0Q6PNGTc3Nzh8eEBp8cnBN/p5BIulxNOxyeAgXG7xW5/UC0mK3oYR+wPtwjdRuGRiJRzSUY776/GqI22xUIsy4J5nqSvyDLjcr6As+R3N5sNOGfM86y+5GW1yLMKV3HdCYahoiqJxkyCm62eGmFqcTRu5KMIIn8ok00QZf8Gs9lESpVlgB2tJF5sNRVzVhx/mOBLSPOtDmR5iWSnXBO4bR7UFpNUMdamcd57pLjg4eE9AJaO0OOI4/EowGgnLZus96xp5fPppJx/MWeyJU3Gu7cPSCljt9tjWWYMvbA+mDM4ZilS6Trs9gfc3z/H5XxGill3skt4eP8el9MJN7fPcHt3i8PNDTYHiSqHccTh9g59LzlKzhGcZ+TM6Met7lQs+F8hHTT7VHnvcdKO313f43x6QpoXbDcbTJctlkUYvMu8aL3DiCd6Wk2+gbBOhaf4XwUjs+9qNOmoNrMoNcIqld+OWKBVba1glU/1mWRrH3PAnROMY/VFO9laWrFahR8PGGr0KL5OXR65nIP4Y6V0DsxRHFmy0jVhORARNhs5/2effYZpWtCFXqnbWvy7LBIxdh3gCDFlDIMrxSVm2t69eyvbTjOw324hIYwFNBk3fY+UIv76Zz8FUsKyJCyXC9K8YLfd4/75C2wON3ChwzBssNvtBMLwQTRJToXLXFoneF/6skI3tyAXVj1WrUp+vlwwT7IRxPH4iGm66BaFsrNeSgmn8wmWQpLuPS2o3sAKjda69pGLWInGWWk8RnVlrrVWUQp2jvK9KnhiLhthMA21NnltUFpvuNwaMYQRK0vm2qmvEi59tYr6BqTB2gcQCZV/QeLLSKX4IhgTEZ6ennBW87fbHXC4uVGz1+Ozzz/HZZImfpvNBptxRPBek9wD5mWBCx6sfo8g6+LU932HtETs9htpn5AZ/dhjv93h8viAaZkxn89gIuwOdxj3B/TjCOc9NpsdtrsbDP0oUSgyEjJ8FzTzoJVHQuQv40zEpfLbWnfJziUJ8zSLWZym4oeacMQ54u7uFm/evpLNOJpqMzQTfQ0rtfNSP7gSngKGXn2X1n53y/m341pGsb3CCphrfK+aZFVHEusGHVbQK8G9L8LxMZ+sOvK1Jx998J32Exsg2SHEOhXKKg+lt6xRffphwG/8xm9KS08VqNNJnGbvHPrdTgHRDqQFuDGKtksp4fh0xOl0xN3NrWi2mOGD7Iw7jjvc3j3H49u3eP/mLZgZh2fPsdlusRm3GIcNbm5vG0eeYWVyFsVaQxZQU7CDjMiyQSxUe/kQMC8LLtMFzjt0Qfb1bNtNzfMs/Lkl4hdffLESMEAS0wlqlRiyHQ7MfFbXZJ2K0oQ2Sb2SkT9bX5FJG01cuUQlnWRRKK+VSxGyazt6XYMpDrIMhpU7WSQi3fnwwWpoz/eBr9b4eoABkKbUuFRXmyEVOo3DMI5gRmFFbDYbjOOIt2/e4E/+5N/g/v4Znj1/gc1mi77rpJo7yF7eMWc8vnsHZsb9/T3GzYicMtIiG4TFeSnpk5xZ0ki+wxICdre3uHv+AvPxCegCdocbbA57RN2CL3SdBkTaToAIXd9LwxbloVnKhyATb8lmWKUVhHXBWgaYYsS8SG40KQO3FY6n0xHsMvreYZ4SUmwCMX1lw69MWaj/nJsArQhDgSFaa6IChuqft/ewNo3Q7bWpLKamdZRNtNOaOl4JRjV1GkGi+lAfN68fvlYarvy/lUqbIIs8paOOlK9JC9AUxVxst9uyEG5vb/Hll69wOOzw5u03SAkYxy1ubm4xaDt2R4RpmmRPJ23vdLlccHNzh6H3mBSpzxvRymJWR0H6M2PoByycsb3Z4/nLl5gzY9jvEboRz+6fY9xu4IMkxUMIGDf9mvVK1YTIaqojaC3MM0To5un/r+zNenVL0juvX8Qa32lPZ8p0le02nS64cKsNtGU3FgIkREstcYXgAyDRXHKHEHf+BH0D4o4b+gsg07gFTXULWYBayCqpJzurypWZdTJPnmkP77imiODiiYgVa+13n2pWaufZ+33XECviiWd+/k/H9u6B5nDkl998BRjKqhSC82k/19fX9H1P2xzJnKXte0LTL2lvLRwrEPE5IJXUzTDqalMCmmoyaqLfPVpbPBaaziYMKxz5/OT5YYxJ0kQSjpf+HWX2YwPhLBdjSpDOBYMgiMxgfYXdZpJOHJIbb4zlzZu3rJbrWBjS911Miy7LmsvLC19tnVFWC5xzPqt0wPQtzcGQ5YrSux9QAsacKYk3aiRp8LjTLFcbTvUOrGNzsaG+WLParFksF6xWS+lc5
4H6qqJEZWNhTnizUuuY9pw2KnNGwGSkQUbGYAeWqwXGdBwOe3b7PcuyIluvORwOrNYLTqdj7EkV1k6YgOTyj41l07lnoqifk2LnfpeVdskyj7lk0YCDWNmUujEAcu0jQs66R7V4gM/eVF6pZyTnWeOe+Y5IBzPfNfiJP8/5ZKdrrSl0Rms6yWD1le7WSieSH/7wN3j37h0oR98bfvSvfcHLl5/zwx/+kBcvXuJ8ZdJgpCQs89DnNzfPGAbD/e17Pt6+ZbO5gqTuUynBEBsRtMVvtlytOC2XdMagc2nVkxeFbxNURk5WFAUOqTay1pJriYTkfoPkWkdL0jqpa0QJV+v7jvuHW47HA0VVUZiCTBfs9we2D/e4TLFer+n6XohyGHwlVAx+ECu17BhLTu33YC3OIQjmR6o2zZd7GsOWI4SuQnFySmh59IeoKeeZcqFQnW2T8yKOMqHaPu2fGAb5iMBScpqJZsAnERKhDJTKefn8BR8/fvRVQTl/8Ad/wHffvuN3f/evc319xTBYvvnla06nryTPfjBsLi6FGyV1k8ZblHW9oFosePbspcQuffWOzjPysqQuSwrfzMKUBRlw2m3JMs2qrtlcXXNxc8NqvY5iOGTQWmulX4DOyHUGHh0xJELawUSrDW9sGWPpO9HD+rbjdNhz+/GjiErTczjsWGUllOIGMdbSNEfKouRopKKe1FtqR9AV0aG9mLaPiWO+HoH4JlzwzPrNC1VSuglHSE/P5yc/JgJkgMHiiGatjoO3aoQnEkIZ+w89RWDTz9Jyee3hRxUGiTN++PDOE3fJfn/kH/7DH7NcLrm9+0CWKYqi5NVnn/Pq1WcY4zjsd6wvLlmt1pRlQdM0UR9zzvkM2SVtfcJpJ8Ho45FSZ1S6BONwOijEmtK3AmyXK4p6wc3z5yw3F+RFQV0vYkldSMdGCzcUi3z0fYleliDm+Dl2dkTuaU4NXduSaU1zPCIOZMNAT60KMRqM4XRqafuOXOcYF/qCTudXQeyxeW5t5wSSisG56JzTxzxZ8ZGO598HQKdUe84SVKSWoIJAXMqnrITU6zOyPbVQ0pebvuT0ZUPSnrOW0ne8zXJHlkm/oT/8w7/JZrMi8+Gd+/v7WC52f3/H9c0NVzc3XF5eij/MjRCeFxcXaK05nU589/23HPsTTom1ul6vOTYN2+29RBl8TySVa7QCMxhy72Kxvp4zizUCIziK1prSuy10lpElKkLqQ0o3oRkMh/2et2/fMgw9i8WC5Uq6rzSHI9pHSA67Pc5ZTu0B5STc1tthann62G/0dSV4I/M1CFZlkIVxbUB6TgW/mH2sr8+Nw3RN5+udi06gAjlNBqCUio7CAHHpxJ8xmrUxAHEeyecp1jxR9JNBhoJXrXIUUOUF7WAZBsswtHz55ZcAscgjtk7Mc9qm4fbje+E6bcNiWUvPJSftdLqu4+JiTdu2PHv2bNQZ8pIsK1guxcemihLjIFeKHMXgwDUd9thQ+nTt3BNWURSx74D21qwjVJiPLpgsWZBAbMHBut/v+f77N+x3DxyOO5rTSWARnMNYwewoi4KLiwtUnnF3dyfwDVYiBsZ65Tuuj8ywVWA8xwx95Z1z+BqPGBbM3GOGYHGTZl4qaFVKPTp3Qi9naCCPH7jHOllKjeEn7lil4iDP0dGc9U5dGFMCnLhRvP6nfAMr0anwhF6w2+1YrVYR4Uc62hpevHhBVQoBBFA9MwzoRY32pW1lWVJVBXVd03uId0HgGWICZN/3FLnGdA6nMzCWw3bL9v1bOBzIVmsKB4VP2VFuzAsLPin8wtqAXasVQ99H5Tms3dAPHI9H3r9/x/v37zgeDz4w3rHf79FWesIHp7H058zZb3eiR6bZF951YZy0pkaBco5cJdIoegOShfDjDR878HBRTzOIc6JxLmZT+gl965hzshHekrgzzpbD4fOGmA7qnI434XDxtcZ7OxeUT+dLxRR5UVHa0vvFLri7u4sYrCsPvb5er2iO0mOgLEva04n3797yKvuc5aJmdbEi1BcEcZyXBUM/YJRglEkrRMtiUeOsoR96MqVo7h94+9Of0334SK2A0wnbdth+IC8LcTzm4nS1IAgODt8QQ/kExgFjXKSu0AvUDIa7+3t5p+bEYXfPw919zOI1w8But6UoCxSOsq5ojG++oRSD6Udry1p6z2msCuVuoD1wxcRfpqa/Kq1wxiVR6LG+EiX1Acq5uEmeUn/Cv2lIyTlHHqLykZjCQIPXn2AUT8lCB9tZ7nTWJfG0qPSXJUaRPM9jrwZvsXMonyUb8+U9t3LW0p4alstltOyur68pfBnbfrdjsVhwsbmiH6Cuc99lxaMPaU1V1oQmB8MgsO1d00hP877l+PCR73/xNd/85Cd88dmvoxcVtmno7u4oS+k5pWrjLUopGraDla54ynn4+VZqSK24Q3pjpAMwjqbpuL/9yMPdB+7ubrm7vaNrW9FFfU+jxXKBRkWYBG0MJx+bdV6UKKVQTlE4FcFdrPJoh747n/J6VQRLUS627fZTP6o8fqMov7ZPKjyJrhbpIqnfDTQQnbEST5vKWLGAxM80fuW8pfdJ+nnC4pCAcDooYeJjLFPr0C88yHkxhbvOt4EeBjItNY3VqqI9nsA4qucBhl2ILNPCCZrTnrzM0CzIsoLGDDhr0XlOrivQCmOFIxhrMV3L/v6B1z/7c777+c/45l/+OVdlQffsFawWHLb3/Pk/+wmf/9Zv8a///u+zXn/h1QUl/Q36AYWjGzoO2w90hwN9e2LoWtr+xLHt6TsweYF1mcRY25Zh6NBKewirls16zWl/jKB2ocNJ6Gh89HMUw2BBSQ/uKBi5GW4Ui8F9khhcIfXcr7y4Vmbr+ZRPMyxkKAJOXR+B2CKRSW3dWLFCGGSkKJU4a30fxJmiN1cK3YTTBVYZ7i6/T4PlAelwfGHnBJs/t+IiCJ2Gy7Kk6zrZ9Ywm86k5CkexFodB4WiPDR+GlmOz49Q2KKe5efaCerWiqteU1YJM5VjlaE4Nr7/+BW++/gqz2/JsWVHrgsV6Q1Ev6bqGm3XNzbKkcAZlvb7jdaauazgdHth/+I6vfvrPefv65wxdR6nFD9cNhu19z15VXL38jKpacmxa0bH8a5dFQds0NK0Uuzy7vqEoC6qy4nZ7R0jsVEpJFoyfVitMC+20XypR3qUWUs4LQMIZTPLawpqFloapz+wphT6sYkqIqRsr/J10JJheEW/qd8aoFap48lxEprI6wBr5p8ddNz7usSHg6dFbsp5YrSHLxXrsu4GiyH0mhnjwVZbhkGzYh4eeqpGyuPVyydCvcMby7Vdf8v3rn3F/+5E6L8mLmvXNcy6ubsgXG158/htcbS558fnnODLWV1fkyyXqeKS8vuFqfclivSErS26uLtlcrqmvLikuNlAW0tXXCYG9f/stb776kq/+2T+hKhSHu/cYNLbQoDKyIsPSsb3b0fSW569eMRjHbrtDKQQxsqromobr62vJji0KirxAQXQeC7wm3sEqs6gT5V5Eo5suGaPHQgjAT7pYVnJlUiAUCOdTDvWUbp4KUSXW
5RPyz4k+NIrIVPzJS+mzrBRv4Xj0+MCqHw1i5HwqIcYoLhHlucgrwYhA0o5TjIosz2makw8LwWJZo5SjbY58ePNL7r7/Baf9vSzIckG3u+f97TuGAXRVU29e8vz5c/7w3/sPuLx6QbVYcvXiJae2ZXl5yavnr6g3S8o8p6wqLl7esLy8pt5comKPT8f+7o7/+x/9mPtv/4JV3tPj2D7cs3U5L68vWOYa6zS950RSFOIwfUcWikS04nQ80DYNRR/CVFLy1rYNQ98LIUS/WyAiyYgJmrPoack6IPqZTop3A9/IRCny9/CEOGvIla7RxGsAk9K7sLYp55u0vXnCAIkefO/pmBGhGndL8pDRZeEf6AloSswjoU10AiS8lOdhJhy51gxeHJSFpDZvNpuIsBMcocvFAuUcx/2e+7tb7t/+kuH0gOl7UBn7U0OeaapyiR0M7z9+oFcfePtmTVbk/Lv//t+iWix59vIV275noTOWVy+oNitKpSiLnGq1obq4olhtkmb3ho/v3vJPf/JnPKsadKE5tSfA8ZffvGO1XJJZi3KKbHGJbnbUVU17bNnuH6ICZYaeoe0EKVtrbGYpioyu7yT12hqs95HNJcPcMy+czbua/H94MZhiwdqg6CeGX7DKIgdL3R6MTEL4znlPQrh//ujbuOwpWw0P0zEhb/pC8t0505ZI0fHps6fM2bDzInPMRyJ0XysqyZQ1+OokAcALfipJ8jOcjh2H3Z43b14zHB/A9FjTg87YnlqeX12Q5QNt39O0LftW0K9/9uVf8G//3t9kWT9jc3mNdlBnOav1FYvlgiLPqIqccrmgvLgkq2uUIjYsNdZgzMB2t6dVlrY5Ua4qPm73FMsL8rqmMxl1dcnpzS2XVxX7/c6rBNLwwjqL04hhUo6Ae23TgBO9VKvpRo6LHtZJxw9jvv+o47sggoQBMNKPcqPREKIxcU1mrqrJ+s4419ytMRWXgYDTgcNYocSY7hEGoVRy/ZnDYWM9XuBqU3Y6Eu04ZpU8Q9oWt23L1WpDUeQcT0dBmvaWIkqyHJxzDH3H6XBgt9tyOh0ZmhbTd4BFF3Boe1a9Iy8Np94wDNB0bayLPOweuLm6pqqXVK8WVGXO5fqCuhKIzyLPycuCoq7QRRGbIjgcg3PcPH/Jm5+9IaOnOx3RTcvQD1xefY7OV2AVXSOo24fDnsP+QFmXaJ+eXtcLqrLEOseiqqXiXXt91DeISGHgx4UNKrOQTnAJCYcb1zVwtmjgxTK2QGzeGIg463IVmpgMPZV8KjUyx3VP1jQK3pRuxxMt3jyJBubIrbxnPgEcPhdITTx8BJ/LSMf2EdU/PuQNRqTGnLIsKDPpgdQ2jVRma03Xtmwfttze3nI8nei6jlPT0nQDx7anawesBWM11y//Kn/1R7/DyVf9hPEfDwf6XtrLlFVFsVigqoJ8WVOuFyI2lwtUnkVQXwm8Gy6ur/mtL36b3Wng9qFh1zje3+7ZLNcYu6AbNDrP+Bf/4p9LZfhux2B6SSfKcuqyRuc5WVmyXC0pFzXlopZMhqwg05mU3qm5g4EJR5p6B8YlkJ9QKebAWd9dzsYTx/WwnoEkBPMkIxmjOqlXIfw+0ckeXaxAjfTtjYCUy4SM/PPFI/N7OxUyD8K0pOVwakLA49SJ/y4tpLi4uIBemmEFDNmmOXI8njjs9zzcP3BqTnTdUeAtLTgn7atzrSmqFVcvfoOvv/qXDL4Bl1QBCdpi057IckXet1i3EqyyqqIMfjUkqE0AkvESaL1Z84Pf/E1WF9e8+e6AHQaavqOocv7ff/J/cXF1zXZ3z4d3b3nx4jnG9CyXK6q6pioFsM9lCucMmfK6sBKITmsMPY6ri0vpw0Q7yQmLxOXG+SRwncDFZht6HvKLn3mwlehUDT/ApHhEefGaOGXPMZpHOf7pMQYX3LjcbmZlBMvlE97Z1B0SPGaBiFLxKIPUj69VjqIqYk7YsqjReeEhB3Lvn+o4Hg/s9jt2+x1t12LtIP4/J4HiwTiyQnBZnev4xS9+gbMpFlnO8bTncNhirbSRdk6gp6y1WGPpkHQeB2CtVE8jISJjHZurK/7N3/t3ePcnf8zDfsdgLE174KuvfkqVS4/Oq6tLtEggiVhUNeViSb1Y4pwR67LtMNaPvR9oTidQisVqSV5WcNqPk2SdePbjShHHKPaEiwq9bxcRvaHzdUuV/LjGEIt/z+ndKTz/OTdHhI6aU3QgiuQPEY2JwyWYsFEvmw+WVEENiY+ph8w9Oi+8WIh/iYsiA1VEULk8LyiritVy5Yt2HW0res7Dwz1tJ5gXkufu6xKCra8y2q7lm69/QXs80htDXS/ZXF5y/eyGV69eje4TpSgrzWK5IityjDUS4/SIPDjJzbdGiLkfGozrqcslD9t7tvs9u8MRO0jYqm97bm5uKOsKnedcXV2xXK1QOqOsakDRtCdOxwPHw5Guk35Kdhjo2obe9KAVD4c9p/1WjATnXUhq9IuJ33yiXHsCJH6vZmsUFzVeor0hQTQS1BMO2XPrnh4T6zKlxDGsNPKywL4nro6E4pk9IFB3QDYcX2M62OnA5GmRVTsFThpjBeJru45FvYhJgsPQczgc6DoJzWjlkQt9y2iHKMYWx9D3NO39BIF6sV5Q+CxaEb0niiJnuZTuKCISBvrW0Fmpz6SqIv6G9UaItZY8K7h4fsVf/xt/gzfff49SOafjMaYjkUke3mK5JMsLcMSiE+csQ9czdD12MFFfE2IvaQ8HHu7u2axW3GstWLH4KQpiUYW1cnFzpzOrAWeJmCOTyAwuKEbEcFW6nv7fR7UDTK3dR66UlBjm0fN04RUjmJrczDfkenTu9MG5N83l0qAfTF9+dCw6nJM6S2nLF86T9jjGGJq24XQ88u7dO7KiIFMZRV7FFyyKQpL+PAZGABpGZbFeU6kxmbEoClbLjU8ZGhiGjiyTFoNaCSS8wvrGKwJUZ8xAN3T0wwDGd4/z6UhlKbj9Lz//NX733/o9lssl5CNi9sXlBZuLCxGTZYFzoogPduB4OtCcBBo0NCoz1sYClaIoMG3HerkYlQ7fv8jqqSRK8igeOUqzbIwznuM8aQzyXOb0nKDOZeekBDy6MM4Qixq19DhglcQvo+3qH2rPKPwBEnPU40ZuNuec4fcMcE7gBFACjtIPrZTzH0xMFvzyyy9R2lEvqli+b62hyHP6oZfmWn0vldiZJsvEaSsWoUzgarWiqguqsuDyUpTq1WrDZnPBcn3Bol5RVQt0kWOHHtf3GOvIbQ7OYLzrQQizBJWByuj6nl/74Q/44kc/4rPdZ+RagJaVUtS1QBo458jKjLzIGdqO7nhkaAXmvTc9RZFTlxVt30XO2w49l3lNWVUYO/hGrYmbIir5BkXA3ZiQ0CMjMV2zKWd7QmcjKPvT6+bFKZPY5VPBT6/d+oGHfth4fSwMZprZel4EPn2kFo7zD7BSCeHrFJVPw5FMU+erk6QgpCYvffWPESW6riq6tqMfOvI8k8yMvIgoOYOxZLmmLHNWq3XkMPv9js9/7Qes12vBtFivqJfLCE6sM4E2CKV
hEu5CeiIhWR86K+J7V/UClRU8f/EqjknmRfSdYRioFzVVXcu4OmnqKmVyVuo5M82paWjbVoD8tJaEy75js7mgaY7e+iNo+H4+z0kWn+WctBdMCeT/13oF8lJT+o26qjfwHjtjOUMkDpQSgLbw/Xzgkwck7ou5eZyGmeS64JtKXlRrSfsNRO/1vNDjUfn43tBL5Y9kLuRgQZfyLBGVS5yzNL4V4PF45Oj1oiDe63rpRYeAm2w2G3GPXF2yubhitZKayrquKYoctKZXCrTg2o7IkFb6JumMoqjiGOt6xdXNMz6+f8sGadwqRcaZj1BI5XjXtrSNYJ9prSXbohbxfzqdUL4KPaAbZVnG6dSwWl3y7t2bZO2IndqCeiMAeTZahUoprEl0pYTzpFXlT3kcxs+DZTqSQPzOeR9bNsLW5yQcZE6xstBIOpmz0jwUJv4ylMch9bLbJlwxWJ9hYeUaF38CgcV7Od9VNtEDskwqyPO8RFmHGwRV0RjDYlFTVxXKCcla71Iwg2C9VtWCqlzgHBG//9QELC9fwFuMXeaur6+Fs1UVq81FrCYvcmlaX+TSogetsUrgMTWC25rlBQEfTSlBD8p0QVEtIlz6crn0fcSFk+VFgfE6nrNORH5dg5Kw2cXVJaYf2O+k3eJ2uyXPc/YPD1z7lkIxhjmz8FPfpdc6HonJqHdlCu3xXefEFfSzVDcLKk9IOp2cHwjOSphNa00eAGoVko+Uzah3EmaC2HRTCGn0M9sZ55rL5jjAcJ+gt87keJZpD1ziLS7rdQqd49Tgi2F9gwrlGAbn/VkCP3A6nVgul9w8f+Yry1uKU0me5RwOB8q8IisylptNxD8zFnRWsNlcsllfslpvyFVGmRViyuuMDCjz53l7wQAAIABJREFUQtB+nOB8WW1RWUbhe4fj58ci3PfofVl5VlAuar59/YbNZoN1jmW1oD2esE5qLkPfdGsceVWiy1Kq4ruO9cUldjDU1YLtfkdeFrHxxHyO07l3sR3J4yNVzK0JEuX8eROnb8IQzqpZSvhoyhnzwGVCC+GQPzYONHU1jNQclfiZ8zQd/NTJmrhE1LgF0oE653zrGnF5hE5zee4tmKwkd6GeUDGiCSnv1ReCK8uS0+kUA+fPX7ygPTWoTMeMja43bC4v0VrzzetfioGSZdS19EJaLBYUZSkdSLSEsIwZpH+UcmIVKk2eFRD7hWsvtyzN8ch+v4/Pe//+jViaKIq6Fl2ulWKSFLu/qhcs12ssju2wZX15QaYc94c70GJhCiRpEUW9c85XlT1WfZ7St6OijuMRBJRSE+711D2fcsIaZyff57gRFVkx6kfzgSnGbIPg01HRIJi+4LmXii9HqslNj+CykEEHApRcKqUyikIQpofBsqxFDFk7oJRD5QoGwZYIxbyC2y/+MV3kXK+W0i3DKUrvcmjbFuVguV6R5wncQFVRFYXgS1iLynMPGep7USpJQsxVJn2RUJKRiuF0OPDhwzse7m65vrmk8QmItx/vpa6Smq4bMB53Lcu075xiY/isWtQ8f/4CZy0fPn7k1ec/oDmeyPMtt7cf6bqOYRir+IMMC3pRIPhzHC79W5DLg/ya6tOTyM4jo+4x25tbmBHVZ/JARawhHIlLRasuqO7yIDxBTCuJ5w8NfPhcOUI6qJGww3fhRTxKo8DGAErMf2NZrzeAELu1BquUoP8MI1ic1pq8KLh5tqFtpJtJWQrM+nq95uc//5k4MwdDXVcitvJM+gkpyDOpotJZRuaEyzonuK9RT8GJnuYMtu85Hk98eP+eTDnubu99H6aSly9fev3Pcv9wJx3lnCMri8Rnt6Jc1BR1BUh2yW9dXrHb7rh8vuJweJCZtM6rrh4O1LfGdgGWKi6Fm+hT6dwrJRvDBoh8v7zKK1xzhhOiC7Em/ZxK5C3MlBLy1H8fW/oyFvY6Z6V6RQVROorO0dIYfS9nOVjQ1ZRGYRgLgRUuKn1T5+94r8cW7DAMLBYeCHggpr9Y6wTa0/uP2rZjtVqz2VyKTja03Nw8RyklZXDDwGq1xNiOspQGErkvb9NKyuek9jPzSX4+8yRTqDyj8NCczovIIO6lS55Yg0PbMpiBqq6iCNoftr5MTlKVClt4rIyCopTwWblYUJU1CrEyF4sVg7Uc90cPU3DyhC4IR87ZEAAIU+43+MhRUm6kUWCsz4iN7k7PE9RkvsNmVSpZ/4Ro09y/tBW2iuLSk905mY0SBD6HpPpor4elOltKHOdcIZPzwps8ITSjv8yOkYTUAnVO+TaF4gWv6xqXZThjqarST/RA73H5Q3PV3XbrcSuqGOsUF8GBsqxEr7GK6+sb4Xw+rSZ0oRE4euXLy4iuDxWrxIXItV8YB2JFOst2u+X6+tqLMem00pxO4MNFfouitNRjtn1PYSwlkqCIEzVltVrG/qHKeagp5yMReFcFespd3WPHa0qFYRXiPo9LMg0pzUXndF29RGJa7ebCekPM4HlSFwsma/g5M9on3mJOuMFtkTGl56nzMFig6fVKST78crmMVlXwfwWOkfrbAjaFlNf1HhLUxy8TAlVKMVjLan3J5vLKg6eU/p00QTyPKeSjdR2yhjMtYTPj+5y3fUffdRwPR+7u7iOcZ5aJx785nQTJZzBSmheIwoY+7tJnqe86mqalHwYRn0VO25xEF9aawVfAj2G68f3nBDb3f0UHbqYEdUiNgDCfwkVJ12jyuwdPDJAWkeNpD5sVrUcXFjwRVXGUMtUxOS15gFJTCv7VhDYPrsakk1FP0KNV69wIZeScpFsHQsmyjDIv6LqWLCsk7WfQOCuhlqIssdYx9ANDPrBab1j4Rl6996NdXIjP7dWrz6gqARkehkHgCooc64SDZglHj1htns05JyLc9J1HCDrgzEBZL7i5usRaK4DDnnv1fU/fCWhf7v10fS/PVM0p4vxLM9icIstoTtLyxxnDqW1Y5LlYurR+A+iR65OypunaZJmOXCp4MZUWnXYknKlUCrcI6zy3Ji3SMzTUe4y+Mq/4Kzw8AL4IdEYSQQ1TSMcvZTX4pqg6dV+4x6QWZPnsw/MkmRC1vIRwEeFGMtjj8YDWGZeXl2PQOMvFI48QoxRbSDfcoiglqK+170ISxOSJi6tL8iIXzAmdcXlxjXHSEXcYBvqupapr6sWSIpdAtgvjRGFdQEk3dE3LyfdB+v77N+y29yyWNb0v22vblvv7e5/BmzFYS9O19G1LboI3Pydv2iiOcaGOoaRrWprmRNe2klqEo+9ajBmt+nNKfdCV030exLNW05rKiGjrRq4NoiYEl0RM0grcypfjOZy05skyjDUxTTs8dlZIMqXUeDOXdtcJIjWR9yqw4PN+mblpO6U5L4aYsXNSbjbqZql7QmuNygVSyjkwRvxkg8nJs4xlvaCuKgY7UFYFh+MJa6SnZd91DMNAURR89tln4qsyA8fdFlNVNFpTVRVFWXrdR4g+05nMhzc2zGDY77Yc9zuatuX+/pb2JIS6qGvatvGcVwyLvhVcscEa+mEQ562xDF2HKXJsprF5gaksxlppj22ddAg+Hr2eJvFbrR
XDzMKb6ExKalgneo6b/APMQoLB6PMKqQ26KL4uIL2XaP++nXUoNgnPG5lJPlnURHk7G7sKBBb/8j6ZWCowdUmcUxTHjo4TSpvoFlFsh+u1Bie9ubVSNE1D5qHU21NDUYRuuZIZUngHap7nLFaSbaq0ou8Ni8UiwpQvlkuf0p2R5wX73Y7Oh4CsNTx7/oLr568kZ0yNprkDqUxqLUPXcn93R9OcOB6lOMQO0kJHedSgtm3JskyI8XDAOsNgZMdL+EnT9y1dN7aQxjnv7pAGEf3QowgNvBTGGrKsGOdMpYW8yttZdmJvRcnkVaOg+Y8MIRGRCk8oclJqTYb11cEsJYhH76hKfWskAXJhmdLRQk9JYLyxynz5uwKnUWrw33oUwVmAbO5jic/RiiwdnH/haeUynigF91WmMyPPcuEuHhag8EB3YiGK78gOYBgoi4Le9AI1VVRYqwhhkrZtWdYluZPelA7D7e1HmVCluLhY45xYiWVf43VpjJXuwYICJG0Jj8eDIFS3HW3Txo1iuo7jYYsxA6dTw8PDPYfDDq0zBp+5myl88yuH8cAtxhhMP9B1LfUik9BT29A3TSzoLYqCQXmDRo2iLDzb+6N8LBUBx2PUvUJE0PrMjMC9wvIF6ZP5TSXK/ZQeLGMFWvJF4jtQI5HNyAnOEJko/BJOEU4TKlymTevPhZPmv0flcHJ+2CVyRtgF2jsZwzlSOV5EKy/PC5QX17I4HZ1pWS6X5IVMUVUt6LoG5wx9P7BaLSiKjLY5YbtBHKk2KNagdc7zFzdYaySdRiuqaikcxPWYQZqc9l1H355om5amldbNpu942G/puxbtHB9v31EUJU3TYEwffWh5XtD1Q/Aa4nyYqG0bdJ6j0BRFjnOGrrPs9wdpcpbnDMZQLxaYZojoR37WHhFL1EXma+OIoMjJxMd5njvk01qOqE6hJtelngEcETXokU6WFiKknweymHwe6TE8+HHxZ6rfhahC9P6nFuzsBcc0msSBqMUyK+vKZ2Es/LkiFoZh4Hg4kOca56SUTKHo+5ZTs+X27o7nz19we/eRqytpqtq1Ldv9lt3unsViyXK5pioXHI8n6vpItX2g71oWyw5rxL1gnaVrWo6HPcfT0ddP7mjbA19//XP2+61EHbDsDlvKYkGRFxwPB8DRtkcGU5CpgsFI92DnJH2pbzuUOuGsIysynJbwldKK9WbNYb/HOUdRlZitB/KzAiwzEpishYs72YtOiGtkgkhUkrsf0rGt8TpyoCq/9lrpGEmIDe9tYATTDI40RAlJtVIUZ1ZjNeiUSPzgXOKCcASFPdQhz8zelESTHRSsjriJVBoHnVwUrd5QpZTirIYWMFYPOGdojgfJO9M6wkc5Z6nrBUPf8O03X6OznNevv6KqJPFP5xrbdrx9+8ZzuBXmqsOuLtjvdxSlpPjUdc397R3ff/+Gqq64uBC3hOl77u8+cDwcuL+/ZX/c8v79G7ruJK1rEIeqXThaEJdFnmPsIBhmmXDiYRCv/TBY+sFQOCdgxdbiBoPKJH37/tTSto3g2TqJ6RoTeaFHDNfR7xj2sMCfK088YT29cTi3Sj1CQFDbwrqFXqLES0b6GIZh6kWwI0dzzo0dScZD/EEq4TbS4Iq4S7R3so1iTuR+GrkPx5yInwRnSYjTWhtjg4ELBujN1WoTuVjww/SdQGWuqhrtu4OE3P48z/jlN1/z7u171qs1h9MRlUnLHJRFO9jfP2CBrjtKxGAYMEo4Tp6XPtPB8u23v6QsS549e0ldS3Ow7f0tD/d3nE4H7h7uOOz29EPDMPTRUYxzHE8Hqqqi78Xo2O124KBciPO378VAKMvSB80zhr7HlUJ4+8Oe4/EoE5Up8lISMNEZDJKxq9Bjy6C5GhIozo1xSqle8MZYYmGm6zH6yZJqdESPdHZMuz7XYjz8/mSOvw3mrIeknLs1VLRm0nJ4dXagE0JidFeMg5kSZXp2+Lvve1bLDafTifXFhqZpKPOCssjoMYClaVv0oDHWsLnYcH1zydt33/H9999y/eyG3XbHZrPGOoNSjv12i8XQ9SIKu1646ul0ZLd/4OOH91hrOJ4O4CRjt+97vnv9NZvNpaA62oGH+zu6rqNrjlJNPvR0doiTH5TtsixjaKiqKu+E7ajyGrxDV5zAhedUjlNzoh0G30JRCppRmmpRe67uCcovuohJO5lPHVwSPgN1DAPJEbwcc33aU2dUoazPssVzRq3URLJEqzRIIGfI0NMucXLypM5lItfD5wK2m4MyOBfUu9F8TY9JBCHcM7g+xOyJ7FdcB36CnFQVye/yAm3fcXWxIDQc1Sj22x3WDgymR1pWi2iRziM9b99+z3q9Ybu7Y7/fcTzsyPKMuipomoN/d7EujYNTc2AYOpr26EFQPMYrTmCwnEVhGdqj73oyeMeoQdkEuz4Uz/gkg2EYIoGlDRm6vkGhqCspLGm7jrquxc0ClFWBVRJGW9RLmuaE1orVciUWvg5QEsR1JOFgo6Iv5rFkbPh5tWJtat+2O9KVcyLyAtPzPkrtjYigOkke2tPxHgdYNROXMYnN+/e1ClEAX4AQsIW8LFcJ1aYWzORBiesiOurGLO5EJ9CR0MYvfXhNKd9yz7E/HLnYaGkkUZaUdUHXWDBSwaS05uLyksViwffvBLZ86ASKabO5wjnY7ba0zX4U/fFQo2thGHBNurPDFnMe+txKMa0b431hUsOGxHlHpd/AwzAImMrge2nmgtajnODXSi6cJCQWuRS3OAVlLbUGImIztMpB+bRxI6qNsUOsaEqXYBRjEue0niOLH1JgHlLEzNATJDDD0F4w81VPwpntyCH9mwf1yvqXd85RaFEVzhb3aseMOkcLLgg85wzYDKVDhcr0OBtc9btMnMl+wR6dH1VGFEEHlDGYvqd3jtPBSeFtUdB4fH4BJcl58fwZL1684OPte95+/x1de8IaEU99d+J4OrBeX7Dfd964GBXY8GhjTLTGIGXOKu4x00+La5RSArfOqDiP3oFRd1Fa0/dNvCZwrcNxx3KxIq8qDs0J5xQXaoNTiv5hB8ButxMu/UyyOpTrcVZqHrRKE+cf85a5L8skhAKp159oMYbrQhc7zSgenQvtFGVe7JxiPOFansAni/mW6UyRQBQQiGIUhWPFt4urEl4s6CbRYAhca7zbuWHgCLDuEmBeLBR5pkT/6Toe7rYoJQXEVVVxc3PDer3hcNzz+vU3viOckUZXStH51J++v0MpF+GmwljHkIqbTJoeMZPiqK1yk/kI40Ux1kF47hWgnowxwrm887PrOyqko/Cp63BtxmUlqd+mN5Jl4hzWuyMWiwVa51xdXvCwvZsSzkyvTY2t+VoIs5FVTgt8nRsXPFwf3slZN0mrJuF+lsdAxlGHd4+/e7zQqTHAzM8VFYHRCkxR18JOUXh2m9zp3G5Lj7SeQHBiM/bHhs6nuGgVsCpyikzz/PlzFqslRVXy+rvv0JnUVpqhQynHMHTgDOvNkizkh50R7fEzR0QfjM5NraZczf8TutYG8Yl1qLAgRS4N5xHYzOA5D07Uru+wTvo9t
e2Rh4d7uqFHlZIa1HatwBEYy6pecHmxZrPZsNvvGUKjW2/yWWP9z8iV03dyzid3hjI5CxmZdPe2/kVgkvOvGN1OwdqPnNk7ZPXsObJ+IwPS52KUjw8Xf6Ll4n0x1uNNSNB41FvCAOOOcONncRCPnWPTBbcW54w008pyqYHMq4ly23U9KhNguqurK/b7LXkmBRf397e+PmBsNaMcUXeZlIAlOqXym2QcO37nhk3nCO6coEcqpSS5EfFFheul4FXjjOBcODumC4naIWAxINZn15+4f7gFrCRlOlHOV+sF6/Way6srrLW0XTPOl0rXZZQyzuIJTsUf5dHsJOFZxXLAOOfnLMzkkHUTdd8GI+5XHDpeOLtRuNn4XQC8m+JlRBC85B7nOMTZ41cQeHh2UZSJjqQEn8v7wayz6Dxjvd7Qtid2u3sOh3vevPkGZoq5UpLmEyp8Ju/uHj/3iVE9+i3kujkHBjA6EZmAweEyja4KBqSrnvHNHAJnCFZjnmu6/sT7D2/QueLZs2cUi4pyuWBxsWaxWHB/f8/pdPB6pPMWb+A00znXITnPje8l8+E3gE7dUlMQlrnP89w6nuNe8/N0+uHcnTElUplIpZikdkCqqofPpkrxtDD0Uzslfdnxvjqk3RSF37E+CdBjiD17dsN6veT1t7/ku+9ec3d7K7j6g+8WMgwM1sS6gLOTlIzJJsUx+J/5xClvWYnPbXJHjP87Q4FOuKhX9FOuGQqScU70NZUJfGfvePP2Daf2xOXFJRebDZnS7Pd7bu9u6fvOtzSUZRQ9yUsclY4mmGp4w8SXF44yBZ1nolfpqd4dUsnjXJxZw/lannPm5pOTgt7uhzfZs1HGzxfBL5ArgMFjLWTxBYPITAeUfiafh6kIdsN4HghUkzHS/AEUTTOQ59Lq78WLV7x48ZJvv/2Gd+/eYMxocaXvVpdVTNd+lEgZxuTHpXRi7s84dHBNjAGRaeImDpSR6wYc+CYXDgnuD2YAX9GjtEKFOlLjQFm6U0NZV6jMcTzs+OoXP2Vzfc1nr37AcrEmz3LJ7GjbOI5QPR/aM4eEgaByhPx750SnDrHFsKGDby8YAUqpMe/fGwMpQ5nYg7O1RIWsmnHNn0S/9pd783i6NSTa7+LCqDDvEwTa4KSdWjvnFjgxSJMjBGwH2vbIarVhGHy1UpaxXqz4/Nc+58Xz5xyOW15/+zXGdhLATvCzwq4L3Xmf2nmPiOmTIjMd5uw8NQLxBnz8kHbdNI3sbC0GwEioI5fQmcRks6JAKuRb7u/e83B3z3p9wc3Nc5qTOKDHAXt9y6+t85p66qOM6tvoZo8iNFqCSmGHscAl3n68eHKPcQrG9XWJGyS8UwwrqUCBo96OKLVjUnZw4IVqbe0VQOVz3zOVCbdxEhZB20RZHh86Z7XnxehoLGgUz66usUp7/LCSly9ecX1zye7hjp/+9M9p2iN2MNgZF5tYOVo/Qo5Ox4Wa+oHi50yJMA2djdkKYS5kGQYcuVMUWcaAnbRmthA97LJBBTlJa/E3SYmbIysrHBpjeqzpMNtOIKb6o+dSwfzVPp1aQnzBIInz7adfnO0eXCWxPqPbxhMYQGhqrj0IzrljHkIEJj7WQEp5aAjlPPmlojLlMKluL/9qr6N5i8ULfUn/CBM6srewiIqpbyYw4GARzRmDtVLZ07cDl8+fsVzWlHlNvVjw7etvef36LzkethJMDpZgckRfzoy7hX8nnPUJO2QSmmFqEEm4iWT3y+dGQeYcWZ4lld7MGumGe4zzgJKGrcoKBKizjsVqzWAFdfFweEBlo9XrjWyi+1dJXcTEpaASdICEe8ZNrBRE+M5kUyk+SWBnK5tcQmhKKCk/nk5c+HL+cL9MjUSR3FW+tkwJIjQYcBZrBGbKKcETk401uuqiqqnSyVWM8UsIYhLCLrNgBw6nPepO050EN/bu7iMPu1uG/pSgND6ekLkYTI/UIAmOZOHACRmMdEXgEtGHl/aV8Up1ODNTYkG2fSfEPL6dF2nEs128N15fk3sEmKfmGDI4BixGqr5DKMuOye9WhVqI8d2VUpErhdlWWlwqcV7sjPMpFR3o8aJw7kRXOz+v8XPg1HbkbTdMb5hO+hPHxHUBkaOl3FBeViZOB9EQCPWMCEr/nj/fDgNNe6BpD2gPu2TsAMpghl7EN0nG5r+qPjUbw1wXUeF/bnp+fI5XiAOXSLmzvHe6UCp+HgnKz4UKYmN86iP+cTqdgFm1djSQxhT2MH+Td1PEtJw0EP4py/Dc4VIjINHnHh3RioLddo/6a3/td9z7d9+B7/wBoAKYmkoGHDhMsksmd0smSOkkVMGUnY4GqVT/BLl8ziR2Tgp1M6XJihxZJBtZdRC9+B7cqSiYT/ZcwZ1P9OS5KQOPYxxFUzw/6KR+XlJis4zvpJWPa0Yuo2JBTchscCGd3Y5pU+cWMxBZ+lkMwk+IIaxZoh5YEZc6jHc215NVDGOdcK4wx4nide5QgFEobVldvUT/0R/9kedAkm8UQiBeY/cPCH+HO0giY6pgBt+W0inRnXkRJ5wver7PHClxWGsZrPFB8BB77JOe3MGnF1b/X21Hzr3c8Wd+TpTxyeufG6/yxBmI2evkyvnFjVPqU5Ydo6XnnOf63veGwC/JfRN/5ExEhR8TfG3p3MkficEhYHeBRub3COeE93j0roE7RDr49KGUIssz/uv/5r9FdV3j/spv/joO6V4bM3oYHxQGdt7jL19GjqBtmG0/y1NO4bUNJIKQiV0wY+/nxJ3oCY9FqR5pC+vxW89NQuQqcz3s3POS1nqPRDqjjylwssn3UYebX5feXlYy3DvF309vpxMuGfTBxyhIT6sGKUGNitvj8+JcBO75ieNXbeD0yIqcP/1//inaGMvDw97HHT91ozF294j6k3PkJ2P0k83wZtXkH3+PqXh79ORgYifoNOO5oZJdPVrY9Pq5u+RTz5tfMx/nJz+b6XXhcEh1kFNMuM5kDPML45g//T5PjS2It5GLMoYxSRQdP+ZfRWDhub9K5xWJYPjw8Tb0Zs/5n/7e35PGBcE3FexsNR381Nj0u9CzdNBeVCrEBB1BctMGEzKA0aM052JPWYlaa5RT6CRxRJ6qQz+J8cMnJki+dhOdazIxyXNVYvafu96F7zzUeNRZA+efZZomdx45RphjRtGqJs9/rEvG+ROFLt7/3GHdY+swUlvgBwoPjXp+3ubqxKeel85dnuf8d//D/yjrNgyD2x/2vHr1kpurDcqNuPtxcsOYfF5TKmak6AQgA9Wf3f1Tl0XYP9mox50Z5Pw+WmuUR42xyscCtaZwAhI87x4bxzcnEjW+07miliA2Ytbn7JTUskw3iAo6mX9Fh4tl/UqldYvyb7rxglsDiI5apZQEvpWUqU3m3DlC1nuKpv6rvAL+JL87x039FP5++H4yN7PP5vOrMylmef3dW/7iL79juVyglVIs6gX/6Mf/mDzPI4TQ9GHTB04WUzLw4jnzh0s671ykjO87f4Hz8zLdRcGPEx2QZyYjXZQ50Qbl+9wx4WBndm/qFpiOL5mCZLHTBXTpZel4
kudrYERzmeJUPFIZmKsuqdR5LFLDs0KbnLnxM5mfT6gKT/0uFxvINP/z3//fqBcLlPKyR2vNF198Qdv00/OTyfG3TNj2mHgorzvG0gIRyAAE/nJCaH5FRjcIj154TjRxssPETgc6cTjOOVpYnOm4plGAcJz77Nx9g0UZudjkmeO7PtWHaLqQ09oGZZ2EOhJ15ewmPKOrpeMLP3P3jMuUB/R7vPmAR5LsKc527rldZ7jfNvz2j/6NKCm0UgKAdn19zT/+P/+UrKwIzQykBxGoYAkqifAbM8SNqHyWpVYFipwAPRR3nxKnqbAHhUOjEWytwDVmI47IfCNxmshhFeJzyoA8gl2Jr11bb20mOz/VZwDxdH9CpXCM6M2oMen83CKLWZOEspwDZ3HKRE6DIjZOGPUxvKhTk7CWiGoHGnSehLvU+H3kkOPHca4DZPx0OmdqByp4qyLHns9Reo+nJIwi2QxOnPNWOerlgv/l7/8D1uvNONZQffLHf/zH/PjH/we///t/CDpjcAMOSV1On+McSWzMxgcmj37yJUczOdzrU9YdXlwBZL47riccK2nGzkbjPyrdYRhzEZmKr5C6HbqzzYcfcdfcWJ3zqQlP58Z7wcaXiCJrfDERm6HHUwhKu3HsWsX3BaK3fjo/0zjvfGxPGVBBDeCJ78M5n7JeRY+TphlKCTh/lgvMwt/5L/8rfv3X/0rcPEoplPWwe6EIdbfb87/+yT/g7/wX/zmff3ZD15xQaDJdxBePCruC0ecV9pYTX9n49sF9Sywp9Tj4RMV/KpY9aLsPgQQEv0EakeqxDF9uH6p0xiiD1Uwcneeq2s8eilGEzDrUTquyZ0cCY+nSsT1aQ+UzZp1wMT+HgUPLTDzmRIGxphJC+05vKrG2J/rfI72aR5+nx68yGs4TuWyGvKx4/+EDf/fv/vf8h3/rb1N5qNSU6wIC85jnBVdXF/xn/+l/wp/92U/44rd/h/cftuz30s9bdJuJ+hrmLnn440HFFw07yCu1cunTfhe5JnDHQCwu4S7nOao8ZpzMlP3PRcHcQIg3SM+ZjelJc16F/6mz8Fti1QqBqSc5yePFn4835RIp4YXKqLR24VPK/fz+c3fFOZ0unGeMYX888v5uyw9+4wv+5H//U/6jv/0fe2UfGL3MW1veAAAAB0lEQVRw/H+b2Lc3Rg46LgAAAABJRU5ErkJggg==" y="-21.499943"/>
</g>
<g id="text_1">
<!-- Train Image -->
<defs>
<path d="M -0.296875 72.90625
L 61.375 72.90625
L 61.375 64.59375
L 35.5 64.59375
L 35.5 0
L 25.59375 0
L 25.59375 64.59375
L -0.296875 64.59375
z
" id="DejaVuSans-84"/>
<path d="M 41.109375 46.296875
Q 39.59375 47.171875 37.8125 47.578125
Q 36.03125 48 33.890625 48
Q 26.265625 48 22.1875 43.046875
Q 18.109375 38.09375 18.109375 28.8125
L 18.109375 0
L 9.078125 0
L 9.078125 54.6875
L 18.109375 54.6875
L 18.109375 46.1875
Q 20.953125 51.171875 25.484375 53.578125
Q 30.03125 56 36.53125 56
Q 37.453125 56 38.578125 55.875
Q 39.703125 55.765625 41.0625 55.515625
z
" id="DejaVuSans-114"/>
<path d="M 34.28125 27.484375
Q 23.390625 27.484375 19.1875 25
Q 14.984375 22.515625 14.984375 16.5
Q 14.984375 11.71875 18.140625 8.90625
Q 21.296875 6.109375 26.703125 6.109375
Q 34.1875 6.109375 38.703125 11.40625
Q 43.21875 16.703125 43.21875 25.484375
L 43.21875 27.484375
z
M 52.203125 31.203125
L 52.203125 0
L 43.21875 0
L 43.21875 8.296875
Q 40.140625 3.328125 35.546875 0.953125
Q 30.953125 -1.421875 24.3125 -1.421875
Q 15.921875 -1.421875 10.953125 3.296875
Q 6 8.015625 6 15.921875
Q 6 25.140625 12.171875 29.828125
Q 18.359375 34.515625 30.609375 34.515625
L 43.21875 34.515625
L 43.21875 35.40625
Q 43.21875 41.609375 39.140625 45
Q 35.0625 48.390625 27.6875 48.390625
Q 23 48.390625 18.546875 47.265625
Q 14.109375 46.140625 10.015625 43.890625
L 10.015625 52.203125
Q 14.9375 54.109375 19.578125 55.046875
Q 24.21875 56 28.609375 56
Q 40.484375 56 46.34375 49.84375
Q 52.203125 43.703125 52.203125 31.203125
z
" id="DejaVuSans-97"/>
<path d="M 9.421875 54.6875
L 18.40625 54.6875
L 18.40625 0
L 9.421875 0
z
M 9.421875 75.984375
L 18.40625 75.984375
L 18.40625 64.59375
L 9.421875 64.59375
z
" id="DejaVuSans-105"/>
<path d="M 54.890625 33.015625
L 54.890625 0
L 45.90625 0
L 45.90625 32.71875
Q 45.90625 40.484375 42.875 44.328125
Q 39.84375 48.1875 33.796875 48.1875
Q 26.515625 48.1875 22.3125 43.546875
Q 18.109375 38.921875 18.109375 30.90625
L 18.109375 0
L 9.078125 0
L 9.078125 54.6875
L 18.109375 54.6875
L 18.109375 46.1875
Q 21.34375 51.125 25.703125 53.5625
Q 30.078125 56 35.796875 56
Q 45.21875 56 50.046875 50.171875
Q 54.890625 44.34375 54.890625 33.015625
z
" id="DejaVuSans-110"/>
<path id="DejaVuSans-32"/>
<path d="M 9.8125 72.90625
L 19.671875 72.90625
L 19.671875 0
L 9.8125 0
z
" id="DejaVuSans-73"/>
<path d="M 52 44.1875
Q 55.375 50.25 60.0625 53.125
Q 64.75 56 71.09375 56
Q 79.640625 56 84.28125 50.015625
Q 88.921875 44.046875 88.921875 33.015625
L 88.921875 0
L 79.890625 0
L 79.890625 32.71875
Q 79.890625 40.578125 77.09375 44.375
Q 74.3125 48.1875 68.609375 48.1875
Q 61.625 48.1875 57.5625 43.546875
Q 53.515625 38.921875 53.515625 30.90625
L 53.515625 0
L 44.484375 0
L 44.484375 32.71875
Q 44.484375 40.625 41.703125 44.40625
Q 38.921875 48.1875 33.109375 48.1875
Q 26.21875 48.1875 22.15625 43.53125
Q 18.109375 38.875 18.109375 30.90625
L 18.109375 0
L 9.078125 0
L 9.078125 54.6875
L 18.109375 54.6875
L 18.109375 46.1875
Q 21.1875 51.21875 25.484375 53.609375
Q 29.78125 56 35.6875 56
Q 41.65625 56 45.828125 52.96875
Q 50 49.953125 52 44.1875
z
" id="DejaVuSans-109"/>
<path d="M 45.40625 27.984375
Q 45.40625 37.75 41.375 43.109375
Q 37.359375 48.484375 30.078125 48.484375
Q 22.859375 48.484375 18.828125 43.109375
Q 14.796875 37.75 14.796875 27.984375
Q 14.796875 18.265625 18.828125 12.890625
Q 22.859375 7.515625 30.078125 7.515625
Q 37.359375 7.515625 41.375 12.890625
Q 45.40625 18.265625 45.40625 27.984375
z
M 54.390625 6.78125
Q 54.390625 -7.171875 48.1875 -13.984375
Q 42 -20.796875 29.203125 -20.796875
Q 24.46875 -20.796875 20.265625 -20.09375
Q 16.0625 -19.390625 12.109375 -17.921875
L 12.109375 -9.1875
Q 16.0625 -11.328125 19.921875 -12.34375
Q 23.78125 -13.375 27.78125 -13.375
Q 36.625 -13.375 41.015625 -8.765625
Q 45.40625 -4.15625 45.40625 5.171875
L 45.40625 9.625
Q 42.625 4.78125 38.28125 2.390625
Q 33.9375 0 27.875 0
Q 17.828125 0 11.671875 7.65625
Q 5.515625 15.328125 5.515625 27.984375
Q 5.515625 40.671875 11.671875 48.328125
Q 17.828125 56 27.875 56
Q 33.9375 56 38.28125 53.609375
Q 42.625 51.21875 45.40625 46.390625
L 45.40625 54.6875
L 54.390625 54.6875
z
" id="DejaVuSans-103"/>
<path d="M 56.203125 29.59375
L 56.203125 25.203125
L 14.890625 25.203125
Q 15.484375 15.921875 20.484375 11.0625
Q 25.484375 6.203125 34.421875 6.203125
Q 39.59375 6.203125 44.453125 7.46875
Q 49.3125 8.734375 54.109375 11.28125
L 54.109375 2.78125
Q 49.265625 0.734375 44.1875 -0.34375
Q 39.109375 -1.421875 33.890625 -1.421875
Q 20.796875 -1.421875 13.15625 6.1875
Q 5.515625 13.8125 5.515625 26.8125
Q 5.515625 40.234375 12.765625 48.109375
Q 20.015625 56 32.328125 56
Q 43.359375 56 49.78125 48.890625
Q 56.203125 41.796875 56.203125 29.59375
z
M 47.21875 32.234375
Q 47.125 39.59375 43.09375 43.984375
Q 39.0625 48.390625 32.421875 48.390625
Q 24.90625 48.390625 20.390625 44.140625
Q 15.875 39.890625 15.1875 32.171875
z
" id="DejaVuSans-101"/>
</defs>
<g transform="translate(48.199347 16.318125)scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-84"/>
<use x="46.333984" xlink:href="#DejaVuSans-114"/>
<use x="87.447266" xlink:href="#DejaVuSans-97"/>
<use x="148.726562" xlink:href="#DejaVuSans-105"/>
<use x="176.509766" xlink:href="#DejaVuSans-110"/>
<use x="239.888672" xlink:href="#DejaVuSans-32"/>
<use x="271.675781" xlink:href="#DejaVuSans-73"/>
<use x="301.167969" xlink:href="#DejaVuSans-109"/>
<use x="398.580078" xlink:href="#DejaVuSans-97"/>
<use x="459.859375" xlink:href="#DejaVuSans-103"/>
<use x="523.335938" xlink:href="#DejaVuSans-101"/>
</g>
</g>
</g>
<g id="axes_2">
<g clip-path="url(#pf02e2d733d)">
<image height="153" id="imageb081ed1ee7" transform="scale(1 -1)translate(0 -153)" width="153" x="189.818182" xlink:href="data:image/png;base64,
iVBORw0KGgoAAAANSUhEUgAAAJkAAACZCAYAAAA8XJi6AAAABHNCSVQICAgIfAhkiAAADEVJREFUeJzt3V9MW2UfB/Bv2xUKsm44Fkt0Gpdtzo3ojM4MnasJ4mYUsmxqvMBsmYlx80+miboLY7hxGnahhkVNVDSMhKibYEC2MmAONkWQbR3IYIVRkilsDAoUO1tOe96LveN9m7bQQp/zPE/5fRKS9ZyTnm/gu/Ocnp4/usLCQhUAkpOTsW/fPoTz2Wef4dVXXw07j8xOTk4O6urqQqZXVlbCbrdzSMSOnncAEmzr1q1YuXIl7xhxNVWypKQknjlIAtMDgNlsxltvvRV2Aa/XC7fbrWmo+WBychLj4+O8Y2hCn56ejjfffDPszOvXr+Pzzz+PuK9GZq+xsREvvfRS2HlpaWkwGAwaJ2JH/8Ybb4Sd4fF48O2330YsIGEnPz8ft99+O+8YcRN2x9/j8aC0tBR79uzROg9JQGFL1tPTg927d2udhSQoOoQhqOXLl8NkMvGOERdUMkFZrVYsWrSId4y4CCnZ6OgoSkpKeGQhCSqkZCMjI/j00095ZJl32tvbceTIEd4xmAsqmdvtxnvvvccry7zT3d2No0eP8o7BXFDJPB4PysvLeWUhCYp2/AlzVDLOqqqqcPDgQd4xmKKScXb16lU4nU7eMZiikhHmqGQC27lzJxYvXsw7xpxRyQRmMpmg0+l4x5gzKhlhjkpGmAsqmd/v55VjXgsEAlBVlXcMZqZKNjY2llBnY8rk448/xgcffBB23oIFCzROE380XApuz549SEtL4x1jTqhkhDkqGWGOSiaBjIwMqY+X6QFAVVU0NzfzzjKv9ff3Y2BgIOy8HTt2SH2Fvx4AfD4ftmzZwjvLvPbVV1+hoqKCdwwmaLiUxJo1a6QdMqlkAmlsbMTFixfDzsvPz4deL+efS87UCeq7775DS0sL7xhxRyWTSE5ODu8Is0Ilk0h2draU+2VUMsEUFxfjzJkz0y7z3HPPYfv27RolmjsqmWBaWlrw999/R5xfUFCANWvWICsrCy+88IKGyWaPSiaZ5cuXT/377rvv5pgkelQyiRmNRuzatYt3jBlRyQS0a9euqA5l6HQ6LFmyRINEc0MlE9DQ0BC8Xi/vGHFDJZNcamqq8LddpZIlANHP0KCSCcpqteL8+fNRLbto0SKh7/E7VbJEum98Ioj16iWRvwnQAzce3jUyMgKTyTT1I/ommATT6XTCbiimtmRmsxnXr1+f+jl58iQWL16M1NRUnvnmtfHxcQQCgaiWzcjIwM6dO9kGmqWI+2QbNmyAy+XCoUOHYDabtcxE/uuxxx5DX18f7xhzNuOVo9u2bYPH40FhYeHUtMHBQfzzzz8sc5EEEtXlyQUFBSgoKJh6/c477+D48eNRrUBRFHR0dMwuHUFnZyfuuusuqa8k16mMb8IwMTGBp556Cv/++y/++OMPlqtKWAMDA7BYLDMud/nyZXz99dcaJIqN3uFwwOFwoLe3l8kK0tLS0NTUhJ9++glPP/00NmzYwGQ9iayurg6Kosy4XEpKipD3M9HdfAa5wWBAfn4+gBthWT2iuKurC7t378Yvv/zC5P0T1ejoaFSPwent7UVZWZkGiaI39enS7/ejoqICFRUVqKqqwoULF5iscPXq1SguLsYTTzzB5P2JeMIewnC73bDZbGhsbERXV1fcV5qVlYUDBw5g8+bNcX9vIh7D448/XhhuhtfrhdPpxODgIEZHR6GqKm699da4rdhisSArKwtOp5PZ/mAimZiYwJNPPjnjtZculyvq7zy1MuPn4uHhYfz222+4dOkSOjs7Y3pzvV6PZ555JuL8devW4dFHH4XNZovpfeejgwcP4sCBA1Ieyog68ZUrV3DlypWY3lyn00FRFGzdujXiMs8++yxaWlpQXV0d03sTeUQcLuNlaGgI/f396Ovrw+rVq0PmL126FOvXr4fD4UBPTw/LKNL79ddf8eKLL057xkVKSgoMBgP6+/s1TDY95tveQCCAvr4+GAwGXLt2DRaLJWQIXblyJTIzM1lHkd6JEydmPAUoJSUFS5cu1ShRdDQ7adHv9+Ovv/7CuXPnwg6N+/fvR25urlZxpLV27VreEWKm+Zmxfr8fbrc7ZLrFYpH+Brxa6O7unnGZe+65R6j7ZnA5/bqnpwc1NTU8Vj0vGI1GZGdnw2q18o4CgFPJAoEA2traUFtbGzS9vLxcmF+MyKI5v89gMGDTpk145JFHNEg0PW4XkgQCgZAnoCQnJ0t7ozctRXsun16vF+L3yTVBuE9KIvxSSHxx/Yu2trbixIkTQdPq6urw0EMPcUqUeER4ZpN831GQqKiqilOnTuH06dO8o4hZsoULF0Kv10d9pc58E+lpvqqqTt1Dw263o6GhQctYEXEvmc/nw+TkJIxG49S0hoYGWK1WnDp1iooWxvDwcNBrVVUxMTGBgYEBlJeXc0oVGfeSNTc3w2w2Izs7O2j6yZMnsWzZMly+fJlTMnmMj4/jk08+4R0jIvooR5gTomQulwsTExO8YxBGhChZa2srHA5HyPTc3NygfTUC5OXl8Y4QMyFKFklJSQkWLlzIO4ZQZHzIlzAl6+vrg8vlCpn+8ssvS3nKMfkfYUrW3t6OwcHBkOkffvghkpOTOSQi8SJMyQDg3LlzGBkZ4R1DWEVFRVJ+tytU4osXL2JsbIx3DGG9/vrrIef3+3y+kFOmREM7OxJTFAWHDx8O+8lcJEJtyQCgtrY25GuTmpoaYW9VyUsgEEBZWZnwBQMELNng4GDIgxI2bdok9I13eVBVVajL3qYjXMlI4qGSEeaELFlpaSkdykggQpbM6/WGnEc2NjZG+2WSErJk4dDzBOQlTcmIvKhkhDkqGWGOSkaYo5IR5qhkhDkqGWGOSkaYo5IR5qhkhDkqGWGOSkaYo5IR5qhkhDkqGWGOSkaYo5IR5qhkhDkqGWGOSkaYo5JJymAw4O233+YdIypUMonJcnNAKhlhjkomkcnJSd4RZoVKJhGz2Rzy+EYZUMkIc1QywhyVjDBHJSPMUckIc1QywhyVjDBHJSPMUckIc1QywpwcX+PP0d69e/Hggw/yjhGipaUFxcXFvGMwl/Al27t3L959911YLBbeUULk5OQgNzcXFRUV+Oabb3jHYSahS/baa69h3759uO2223hHCSszMxN5eXl44IEHoCgKDh06xDsSEwm7T/bKK6/g/fffF7Zg/++OO+5AUVERnn/+ed5RmEjILdmOHTuwf/9+pKen844SNYvFgiVLlvCOwURClWzLli0oKSnBLbfcArPZzDtOzD766CNcvXoVR44c4R0lroQdLr/88ku43e6ol9+4cSN++OEHZGZmSlkw4MZJiaWlpdi8eTPvKHEl7JbM5/NBVdWoll23bh3q6+uRlJQUcZmysjI4nc44pYuPe++9F9u3bw+alpqaiqqqKvj9fqxfvx4dHR1T81wul5QPlxW2ZLHQ6/XTFuz7779Hb2+vhomi09HRgaSkJOTl5QVNNxqNMBqNIQ+1N5lMWsaLG2GHy2itWLE
CbW1tEedXVlbiwoULGiaKzXRba7vdjhUrVkj/dDzpS5acnBx2uqqqOHbsGOx2u8aJYnP27FnYbLaIZXM4HAgEAsjIyAiZJ8tFJVIPl6tWrQraZwFuPABeURQ0NTXh999/55QsNs3NzUhKSoLVag0ZIm8aGhoKeu33+1FUVKRFvDmTtmTp6eno7u4Omd7e3o7KykoOieamsbERRqMR2dnZUu7cT0eq4fLOO+8EAOh0OixbtixkvqIo8Hg8WseKm/r6erS2tkJRFN5R4kqqLZnT6cTatWthMpnC7uz39vaitraWQ7L4sdlsWLBgAe6//34YjcaIy127dk3DVHMjVckA4M8//ww73ev1Ynh4WOM0bPz888/Q6XS47777whZNVVV88cUXHJLNjtDDZX9/f8gD78Px+Xxoa2vD8ePHNUiljerqapw/fz7s0Hnp0iUOiWZP6JL9+OOP8Hq9My43PDycUAW7qbq6GmfOnEFnZ2fQf7aysjKOqWIn/HBpt9vx8MMPR/xo7/V60dXVpXEq7Rw9ehTAjS//p9tHE5nwJbPZbJicnMTGjRtDjnz7fD40NTXh9OnTnNJp59ixY7wjzJrQw+VNDQ0NIUfEFUVBfX39vCiY7ITfkt1UU1MT9DoQCODs2bOc0pBYSFOy6b4EJ2KTYrgkcqOSEeaoZIQ5KhlhjkpGmKOSEeaoZIQ5Khlh7j+IobnQcdL/mQAAAABJRU5ErkJggg==" y="-21.499943"/>
</g>
<g id="text_2">
<!-- Label -->
<defs>
<path d="M 9.8125 72.90625
L 19.671875 72.90625
L 19.671875 8.296875
L 55.171875 8.296875
L 55.171875 0
L 9.8125 0
z
" id="DejaVuSans-76"/>
<path d="M 48.6875 27.296875
Q 48.6875 37.203125 44.609375 42.84375
Q 40.53125 48.484375 33.40625 48.484375
Q 26.265625 48.484375 22.1875 42.84375
Q 18.109375 37.203125 18.109375 27.296875
Q 18.109375 17.390625 22.1875 11.75
Q 26.265625 6.109375 33.40625 6.109375
Q 40.53125 6.109375 44.609375 11.75
Q 48.6875 17.390625 48.6875 27.296875
z
M 18.109375 46.390625
Q 20.953125 51.265625 25.265625 53.625
Q 29.59375 56 35.59375 56
Q 45.5625 56 51.78125 48.09375
Q 58.015625 40.1875 58.015625 27.296875
Q 58.015625 14.40625 51.78125 6.484375
Q 45.5625 -1.421875 35.59375 -1.421875
Q 29.59375 -1.421875 25.265625 0.953125
Q 20.953125 3.328125 18.109375 8.203125
L 18.109375 0
L 9.078125 0
L 9.078125 75.984375
L 18.109375 75.984375
z
" id="DejaVuSans-98"/>
<path d="M 9.421875 75.984375
L 18.40625 75.984375
L 18.40625 0
L 9.421875 0
z
" id="DejaVuSans-108"/>
</defs>
<g transform="translate(249.721278 16.318125)scale(0.12 -0.12)">
<use xlink:href="#DejaVuSans-76"/>
<use x="55.712891" xlink:href="#DejaVuSans-97"/>
<use x="116.992188" xlink:href="#DejaVuSans-98"/>
<use x="180.46875" xlink:href="#DejaVuSans-101"/>
<use x="241.992188" xlink:href="#DejaVuSans-108"/>
</g>
</g>
</g>
</g>
<defs>
<clipPath id="p58ad9a7e6d">
<rect height="152.181818" width="152.181818" x="7.2" y="22.318125"/>
</clipPath>
<clipPath id="pf02e2d733d">
<rect height="152.181818" width="152.181818" x="189.818182" y="22.318125"/>
</clipPath>
</defs>
</svg>
@@ -6,10 +6,16 @@
Here PaddlePaddle provides the following CV tutorials for you to study:

- `Image classification <./mnist_lenet_classification/mnist_lenet_classification.html>`_ : image classification on the MNIST dataset with Paddle.
- `Image classification <./convnet_image_classification/convnet_image_classification.html>`_ : image classification on the CIFAR-10 dataset with Paddle.
- `Image search <./image_search/image_search.html>`_ : implementing image search with Paddle.
- `Image segmentation <./image_segmentation/pets_image_segmentation_U_Net_like.html>`_ : image segmentation with a U-Net-style model in Paddle.

.. toctree::
   :hidden:
   :titlesonly:

   mnist_lenet_classification/mnist_lenet_classification.rst
   convnet_image_classification/convnet_image_classification.rst
   image_search/image_search.rst
   image_segmentation/pets_image_segmentation_U_Net_like.rst
@@ -11,9 +11,13 @@
**Overview**

- `Quick start <./simple_case/index_cn.html>`_ : a quick look at the features and capabilities of Paddle 2.
- `Computer vision <./cv_case/index_cn.html>`_ : examples of solving computer vision problems with Paddle.
- `Natural language processing <./nlp_case/index_cn.html>`_ : examples of solving natural language processing problems with Paddle.

.. toctree::
   :hidden:

   quick_start/index_cn.rst
   cv_case/index_cn.rst
   nlp_case/index_cn.rst
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# IMDB 数据集使用BOW网络的文本分类\n",
"\n",
"本示例教程演示如何在IMDB数据集上用简单的BOW网络完成文本分类的任务。\n",
"\n",
"IMDB数据集是一个对电影评论标注为正向评论与负向评论的数据集,共有25000条文本数据作为训练集,25000条文本数据作为测试集。\n",
"该数据集的官方地址为: http://ai.stanford.edu/~amaas/data/sentiment/\n",
"\n",
"- Warning: `paddle.dataset.imdb`先在是一个非常粗野的实现,后续需要有替代的方案。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 环境设置\n",
"\n",
"本示例基于飞桨开源框架2.0版本。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d\n"
]
}
],
"source": [
"import paddle\n",
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 加载数据\n",
"\n",
"我们会使用`paddle.dataset`完成数据下载,构建字典和准备数据读取器。在飞桨2.0版本中,推荐使用padding的方式来对同一个batch中长度不一的数据进行补齐,所以在字典中,我们还会添加一个特殊的`<pad>`词,用来在后续对batch中较短的句子进行填充。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Loading IMDB word dict....\n"
]
}
],
"source": [
"print(\"Loading IMDB word dict....\")\n",
"word_dict = paddle.dataset.imdb.word_dict()\n",
"\n",
"train_reader = paddle.dataset.imdb.train(word_dict)\n",
"test_reader = paddle.dataset.imdb.test(word_dict)\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"the:0\n",
"and:1\n",
"a:2\n",
"of:3\n",
"to:4\n",
"...\n",
"virtual:5143\n",
"warriors:5144\n",
"widely:5145\n",
"<unk>:5146\n",
"<pad>:5147\n",
"totally 5148 words\n"
]
}
],
"source": [
"# add a pad token to the dict for later padding the sequence\n",
"word_dict['<pad>'] = len(word_dict)\n",
"\n",
"for k in list(word_dict)[:5]:\n",
" print(\"{}:{}\".format(k.decode('ASCII'), word_dict[k]))\n",
"\n",
"print(\"...\")\n",
"\n",
"for k in list(word_dict)[-5:]:\n",
" print(\"{}:{}\".format(k if isinstance(k, str) else k.decode('ASCII'), word_dict[k]))\n",
"\n",
"print(\"totally {} words\".format(len(word_dict)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 参数设置\n",
"\n",
"在这里我们设置一下词表大小,`embedding`的大小,batch_size,等等"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"vocab_size = len(word_dict)\n",
"emb_size = 256\n",
"seq_len = 200\n",
"batch_size = 32\n",
"epoch_num = 2\n",
"pad_id = word_dict['<pad>']\n",
"\n",
"classes = ['negative', 'positive']\n",
"\n",
"def ids_to_str(ids):\n",
" #print(ids)\n",
" words = []\n",
" for k in ids:\n",
" w = list(word_dict)[k]\n",
" words.append(w if isinstance(w, str) else w.decode('ASCII'))\n",
" return \" \".join(words)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"在这里,取出一条数据打印出来看看,可以对数据有一个初步直观的印象。"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[5146, 43, 71, 6, 1092, 14, 0, 878, 130, 151, 5146, 18, 281, 747, 0, 5146, 3, 5146, 2165, 37, 5146, 46, 5, 71, 4089, 377, 162, 46, 5, 32, 1287, 300, 35, 203, 2136, 565, 14, 2, 253, 26, 146, 61, 372, 1, 615, 5146, 5, 30, 0, 50, 3290, 6, 2148, 14, 0, 5146, 11, 17, 451, 24, 4, 127, 10, 0, 878, 130, 43, 2, 50, 5146, 751, 5146, 5, 2, 221, 3727, 6, 9, 1167, 373, 9, 5, 5146, 7, 5, 1343, 13, 2, 5146, 1, 250, 7, 98, 4270, 56, 2316, 0, 928, 11, 11, 9, 16, 5, 5146, 5146, 6, 50, 69, 27, 280, 27, 108, 1045, 0, 2633, 4177, 3180, 17, 1675, 1, 2571] 0\n",
"<unk> has much in common with the third man another <unk> film set among the <unk> of <unk> europe like <unk> there is much inventive camera work there is an innocent american who gets emotionally involved with a woman he doesnt really understand and whose <unk> is all the more striking in contrast with the <unk> br but id have to say that the third man has a more <unk> storyline <unk> is a bit disjointed in this respect perhaps this is <unk> it is presented as a <unk> and making it too coherent would spoil the effect br br this movie is <unk> <unk> in more than one sense one never sees the sun shine grim but intriguing and frightening\n",
"negative\n"
]
}
],
"source": [
"# 取出来第一条数据看看样子。\n",
"sent, label = next(train_reader())\n",
"print(sent, label)\n",
"\n",
"print(ids_to_str(sent))\n",
"print(classes[label])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 用padding的方式对齐数据\n",
"\n",
"文本数据中,每一句话的长度都是不一样的,为了方便后续的神经网络的计算,常见的处理方式是把数据集中的数据都统一成同样长度的数据。这包括:对于较长的数据进行截断处理,对于较短的数据用特殊的词`<pad>`进行填充。接下来的代码会对数据集中的数据进行这样的处理。"
]
},
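  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "Before processing the real dataset, the cell below gives a tiny, self-contained illustration of the truncate-or-pad rule described above. The values (`toy_seq_len`, `toy_pad_id`) are made up for this illustration only and are not used anywhere else in the notebook."
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# toy illustration of the truncate-or-pad rule (hypothetical values, pure Python)\n",
    "toy_seq_len = 5   # target length for this toy example\n",
    "toy_pad_id = 0    # padding id for this toy example only\n",
    "\n",
    "for ids in ([3, 1, 4], [1, 5, 9, 2, 6, 5, 3]):\n",
    "    padded = ids[:toy_seq_len] + [toy_pad_id] * (toy_seq_len - len(ids))\n",
    "    print(ids, '->', padded)\n",
    "# expected:\n",
    "# [3, 1, 4] -> [3, 1, 4, 0, 0]\n",
    "# [1, 5, 9, 2, 6, 5, 3] -> [1, 5, 9, 2, 6]"
   ]
  },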
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(25000, 200)\n",
"(25000, 1)\n",
"(25000, 200)\n",
"(25000, 1)\n",
"<unk> has much in common with the third man another <unk> film set among the <unk> of <unk> europe like <unk> there is much inventive camera work there is an innocent american who gets emotionally involved with a woman he doesnt really understand and whose <unk> is all the more striking in contrast with the <unk> br but id have to say that the third man has a more <unk> storyline <unk> is a bit disjointed in this respect perhaps this is <unk> it is presented as a <unk> and making it too coherent would spoil the effect br br this movie is <unk> <unk> in more than one sense one never sees the sun shine grim but intriguing and frightening <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>\n",
"<unk> is the most original movie ive seen in years if you like unique thrillers that are influenced by film noir then this is just the right cure for all of those hollywood summer <unk> <unk> the theaters these days von <unk> <unk> like breaking the waves have gotten more <unk> but this is really his best work it is <unk> without being distracting and offers the perfect combination of suspense and dark humor its too bad he decided <unk> cameras were the wave of the future its hard to say who talked him away from the style he <unk> here but its everyones loss that he went into his heavily <unk> <unk> direction instead <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>\n",
"<unk> von <unk> is never <unk> in trying out new techniques some of them are very original while others are best <unk> br he depicts <unk> germany as a <unk> train journey with so many cities lying in ruins <unk> <unk> a young american of german descent feels <unk> to help in their <unk> it is not a simple task as he quickly finds outbr br his uncle finds him a job as a night <unk> on the <unk> <unk> line his job is to <unk> to the needs of the passengers when the shoes are <unk> a <unk> mark is made on the <unk> a terrible argument <unk> when a passengers shoes are not <unk> despite the fact they have been <unk> there are many <unk> to the german <unk> of <unk> to such stupid <unk> br the <unk> journey is like an <unk> <unk> mans <unk> through life with all its <unk> and <unk> in one sequence <unk> <unk> through the back <unk> to discover them filled with <unk> bodies appearing to have just escaped from <unk> these images horrible as they are are <unk> as in a dream each with its own terrible impact yet <unk> br\n"
]
}
],
"source": [
"def create_padded_dataset(reader):\n",
" padded_sents = []\n",
" labels = []\n",
" for batch_id, data in enumerate(reader):\n",
" sent, label = data\n",
" padded_sent = sent[:seq_len] + [pad_id] * (seq_len - len(sent))\n",
" padded_sents.append(padded_sent)\n",
" labels.append(label)\n",
" return np.array(padded_sents), np.expand_dims(np.array(labels), axis=1)\n",
"\n",
"train_sents, train_labels = create_padded_dataset(train_reader())\n",
"test_sents, test_labels = create_padded_dataset(test_reader())\n",
"\n",
"print(train_sents.shape)\n",
"print(train_labels.shape)\n",
"print(test_sents.shape)\n",
"print(test_labels.shape)\n",
"\n",
"for sent in train_sents[:3]:\n",
" print(ids_to_str(sent))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 组建网络\n",
"\n",
"本示例中,我们将会使用一个不考虑词的顺序的BOW的网络,在查找到每个词对应的embedding后,简单的取平均,作为一个句子的表示。然后用`Linear`进行线性变换。为了防止过拟合,我们还使用了`Dropout`。"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [],
"source": [
"class MyNet(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(MyNet, self).__init__()\n",
" self.emb = paddle.nn.Embedding(vocab_size, emb_size)\n",
" self.fc = paddle.nn.Linear(in_features=emb_size, out_features=2)\n",
" self.dropout = paddle.nn.Dropout(0.5)\n",
"\n",
" def forward(self, x):\n",
" x = self.emb(x)\n",
" x = paddle.reduce_mean(x, dim=1)\n",
" x = self.dropout(x)\n",
" x = self.fc(x)\n",
" return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 开始模型的训练\n"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch: 0, batch_id: 0, loss is: [0.6926701]\n",
"epoch: 0, batch_id: 500, loss is: [0.41248566]\n",
"[validation] accuracy/loss: 0.8505121469497681/0.3615057170391083\n",
"epoch: 1, batch_id: 0, loss is: [0.29521096]\n",
"epoch: 1, batch_id: 500, loss is: [0.2916747]\n",
"[validation] accuracy/loss: 0.86475670337677/0.3259459137916565\n"
]
}
],
"source": [
"def train(model):\n",
" model.train()\n",
"\n",
" opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n",
"\n",
" for epoch in range(epoch_num):\n",
" # shuffle data\n",
" perm = np.random.permutation(len(train_sents))\n",
" train_sents_shuffled = train_sents[perm]\n",
" train_labels_shuffled = train_labels[perm]\n",
" \n",
" for batch_id in range(len(train_sents_shuffled) // batch_size):\n",
" x_data = train_sents_shuffled[(batch_id * batch_size):((batch_id+1)*batch_size)]\n",
" y_data = train_labels_shuffled[(batch_id * batch_size):((batch_id+1)*batch_size)]\n",
" \n",
" sent = paddle.to_tensor(x_data)\n",
" label = paddle.to_tensor(y_data)\n",
" \n",
" logits = model(sent)\n",
" loss = paddle.nn.functional.softmax_with_cross_entropy(logits, label)\n",
" \n",
" avg_loss = paddle.mean(loss)\n",
" if batch_id % 500 == 0:\n",
" print(\"epoch: {}, batch_id: {}, loss is: {}\".format(epoch, batch_id, avg_loss.numpy()))\n",
" avg_loss.backward()\n",
" opt.minimize(avg_loss)\n",
" model.clear_gradients()\n",
"\n",
" # evaluate model after one epoch\n",
" model.eval()\n",
" accuracies = []\n",
" losses = []\n",
" for batch_id in range(len(test_sents) // batch_size):\n",
" x_data = test_sents[(batch_id * batch_size):((batch_id+1)*batch_size)]\n",
" y_data = test_labels[(batch_id * batch_size):((batch_id+1)*batch_size)]\n",
" \n",
" sent = paddle.to_tensor(x_data)\n",
" label = paddle.to_tensor(y_data)\n",
"\n",
" logits = model(sent)\n",
" loss = paddle.nn.functional.softmax_with_cross_entropy(logits, label)\n",
" acc = paddle.metric.accuracy(logits, label)\n",
" \n",
" accuracies.append(acc.numpy())\n",
" losses.append(loss.numpy())\n",
" \n",
" avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)\n",
" print(\"[validation] accuracy/loss: {}/{}\".format(avg_acc, avg_loss))\n",
" \n",
" model.train()\n",
" \n",
"model = MyNet()\n",
"train(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The End\n",
"\n",
"可以看到,在这个数据集上,经过两轮的迭代可以得到86%左右的准确率。你也可以通过调整网络结构和超参数,来获得更好的效果。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"colab": {
"name": "cifar-10-cnn.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
IMDB 数据集使用BOW网络的文本分类
================================
本示例教程演示如何在IMDB数据集上用简单的BOW网络完成文本分类的任务。
IMDB数据集是一个对电影评论标注为正向评论与负向评论的数据集,共有25000条文本数据作为训练集,25000条文本数据作为测试集。
该数据集的官方地址为: http://ai.stanford.edu/~amaas/data/sentiment/
- Warning:
``paddle.dataset.imdb``\ 现在是一个非常粗糙的实现,后续需要有替代的方案。
环境设置
--------
本示例基于飞桨开源框架2.0版本。
.. code::
import paddle
import numpy as np
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
264e76cae6861ad9b1d4bcd8c3212f7a78c01e4d
加载数据
--------
我们会使用\ ``paddle.dataset``\ 完成数据下载,构建字典和准备数据读取器。在飞桨2.0版本中,推荐使用padding的方式来对同一个batch中长度不一的数据进行补齐,所以在字典中,我们还会添加一个特殊的\ ``<pad>``\ 词,用来在后续对batch中较短的句子进行填充。
.. code::
print("Loading IMDB word dict....")
word_dict = paddle.dataset.imdb.word_dict()
train_reader = paddle.dataset.imdb.train(word_dict)
test_reader = paddle.dataset.imdb.test(word_dict)
.. parsed-literal::
Loading IMDB word dict....
.. code::
# add a pad token to the dict for later padding the sequence
word_dict['<pad>'] = len(word_dict)
for k in list(word_dict)[:5]:
print("{}:{}".format(k.decode('ASCII'), word_dict[k]))
print("...")
for k in list(word_dict)[-5:]:
print("{}:{}".format(k if isinstance(k, str) else k.decode('ASCII'), word_dict[k]))
print("totally {} words".format(len(word_dict)))
.. parsed-literal::
the:0
and:1
a:2
of:3
to:4
...
virtual:5143
warriors:5144
widely:5145
<unk>:5146
<pad>:5147
totally 5148 words
参数设置
--------
在这里我们设置一下词表大小、\ ``embedding``\ 的大小、batch_size等超参数。
.. code::
vocab_size = len(word_dict)
emb_size = 256
seq_len = 200
batch_size = 32
epoch_num = 2
pad_id = word_dict['<pad>']
classes = ['negative', 'positive']
def ids_to_str(ids):
#print(ids)
words = []
for k in ids:
w = list(word_dict)[k]
words.append(w if isinstance(w, str) else w.decode('ASCII'))
return " ".join(words)
在这里,取出一条数据打印出来看看,可以对数据有一个初步直观的印象。
.. code::
# 取出来第一条数据看看样子。
sent, label = next(train_reader())
print(sent, label)
print(ids_to_str(sent))
print(classes[label])
.. parsed-literal::
[5146, 43, 71, 6, 1092, 14, 0, 878, 130, 151, 5146, 18, 281, 747, 0, 5146, 3, 5146, 2165, 37, 5146, 46, 5, 71, 4089, 377, 162, 46, 5, 32, 1287, 300, 35, 203, 2136, 565, 14, 2, 253, 26, 146, 61, 372, 1, 615, 5146, 5, 30, 0, 50, 3290, 6, 2148, 14, 0, 5146, 11, 17, 451, 24, 4, 127, 10, 0, 878, 130, 43, 2, 50, 5146, 751, 5146, 5, 2, 221, 3727, 6, 9, 1167, 373, 9, 5, 5146, 7, 5, 1343, 13, 2, 5146, 1, 250, 7, 98, 4270, 56, 2316, 0, 928, 11, 11, 9, 16, 5, 5146, 5146, 6, 50, 69, 27, 280, 27, 108, 1045, 0, 2633, 4177, 3180, 17, 1675, 1, 2571] 0
<unk> has much in common with the third man another <unk> film set among the <unk> of <unk> europe like <unk> there is much inventive camera work there is an innocent american who gets emotionally involved with a woman he doesnt really understand and whose <unk> is all the more striking in contrast with the <unk> br but id have to say that the third man has a more <unk> storyline <unk> is a bit disjointed in this respect perhaps this is <unk> it is presented as a <unk> and making it too coherent would spoil the effect br br this movie is <unk> <unk> in more than one sense one never sees the sun shine grim but intriguing and frightening
negative
用padding的方式对齐数据
----------------------------
文本数据中,每一句话的长度都是不一样的,为了方便后续的神经网络的计算,常见的处理方式是把数据集中的数据都统一成同样长度的数据。这包括:对于较长的数据进行截断处理,对于较短的数据用特殊的词\ ``<pad>``\ 进行填充。接下来的代码会对数据集中的数据进行这样的处理。
.. code::
def create_padded_dataset(reader):
padded_sents = []
labels = []
for batch_id, data in enumerate(reader):
sent, label = data
padded_sent = sent[:seq_len] + [pad_id] * (seq_len - len(sent))
padded_sents.append(padded_sent)
labels.append(label)
return np.array(padded_sents), np.expand_dims(np.array(labels), axis=1)
train_sents, train_labels = create_padded_dataset(train_reader())
test_sents, test_labels = create_padded_dataset(test_reader())
print(train_sents.shape)
print(train_labels.shape)
print(test_sents.shape)
print(test_labels.shape)
for sent in train_sents[:3]:
print(ids_to_str(sent))
.. parsed-literal::
(25000, 200)
(25000, 1)
(25000, 200)
(25000, 1)
<unk> has much in common with the third man another <unk> film set among the <unk> of <unk> europe like <unk> there is much inventive camera work there is an innocent american who gets emotionally involved with a woman he doesnt really understand and whose <unk> is all the more striking in contrast with the <unk> br but id have to say that the third man has a more <unk> storyline <unk> is a bit disjointed in this respect perhaps this is <unk> it is presented as a <unk> and making it too coherent would spoil the effect br br this movie is <unk> <unk> in more than one sense one never sees the sun shine grim but intriguing and frightening <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>
<unk> is the most original movie ive seen in years if you like unique thrillers that are influenced by film noir then this is just the right cure for all of those hollywood summer <unk> <unk> the theaters these days von <unk> <unk> like breaking the waves have gotten more <unk> but this is really his best work it is <unk> without being distracting and offers the perfect combination of suspense and dark humor its too bad he decided <unk> cameras were the wave of the future its hard to say who talked him away from the style he <unk> here but its everyones loss that he went into his heavily <unk> <unk> direction instead <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad> <pad>
<unk> von <unk> is never <unk> in trying out new techniques some of them are very original while others are best <unk> br he depicts <unk> germany as a <unk> train journey with so many cities lying in ruins <unk> <unk> a young american of german descent feels <unk> to help in their <unk> it is not a simple task as he quickly finds outbr br his uncle finds him a job as a night <unk> on the <unk> <unk> line his job is to <unk> to the needs of the passengers when the shoes are <unk> a <unk> mark is made on the <unk> a terrible argument <unk> when a passengers shoes are not <unk> despite the fact they have been <unk> there are many <unk> to the german <unk> of <unk> to such stupid <unk> br the <unk> journey is like an <unk> <unk> mans <unk> through life with all its <unk> and <unk> in one sequence <unk> <unk> through the back <unk> to discover them filled with <unk> bodies appearing to have just escaped from <unk> these images horrible as they are are <unk> as in a dream each with its own terrible impact yet <unk> br
组建网络
--------
本示例中,我们将会使用一个不考虑词的顺序的BOW(词袋)网络:在查找到每个词对应的embedding后,简单地取平均,作为整个句子的表示,然后用\ ``Linear``\ 进行线性变换。为了防止过拟合,我们还使用了\ ``Dropout``\ 。
.. code::
class MyNet(paddle.nn.Layer):
def __init__(self):
super(MyNet, self).__init__()
self.emb = paddle.nn.Embedding(vocab_size, emb_size)
self.fc = paddle.nn.Linear(in_features=emb_size, out_features=2)
self.dropout = paddle.nn.Dropout(0.5)
def forward(self, x):
x = self.emb(x)
x = paddle.reduce_mean(x, dim=1)
x = self.dropout(x)
x = self.fc(x)
return x
开始模型的训练
--------------
.. code::
def train(model):
model.train()
opt = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
for epoch in range(epoch_num):
# shuffle data
perm = np.random.permutation(len(train_sents))
train_sents_shuffled = train_sents[perm]
train_labels_shuffled = train_labels[perm]
for batch_id in range(len(train_sents_shuffled) // batch_size):
x_data = train_sents_shuffled[(batch_id * batch_size):((batch_id+1)*batch_size)]
y_data = train_labels_shuffled[(batch_id * batch_size):((batch_id+1)*batch_size)]
sent = paddle.to_tensor(x_data)
label = paddle.to_tensor(y_data)
logits = model(sent)
loss = paddle.nn.functional.softmax_with_cross_entropy(logits, label)
avg_loss = paddle.mean(loss)
if batch_id % 500 == 0:
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
avg_loss.backward()
opt.minimize(avg_loss)
model.clear_gradients()
# evaluate model after one epoch
model.eval()
accuracies = []
losses = []
for batch_id in range(len(test_sents) // batch_size):
x_data = test_sents[(batch_id * batch_size):((batch_id+1)*batch_size)]
y_data = test_labels[(batch_id * batch_size):((batch_id+1)*batch_size)]
sent = paddle.to_tensor(x_data)
label = paddle.to_tensor(y_data)
logits = model(sent)
loss = paddle.nn.functional.softmax_with_cross_entropy(logits, label)
acc = paddle.metric.accuracy(logits, label)
accuracies.append(acc.numpy())
losses.append(loss.numpy())
avg_acc, avg_loss = np.mean(accuracies), np.mean(losses)
print("[validation] accuracy/loss: {}/{}".format(avg_acc, avg_loss))
model.train()
model = MyNet()
train(model)
.. parsed-literal::
epoch: 0, batch_id: 0, loss is: [0.6926701]
epoch: 0, batch_id: 500, loss is: [0.41248566]
[validation] accuracy/loss: 0.8505121469497681/0.3615057170391083
epoch: 1, batch_id: 0, loss is: [0.29521096]
epoch: 1, batch_id: 500, loss is: [0.2916747]
[validation] accuracy/loss: 0.86475670337677/0.3259459137916565
The End
--------
可以看到,在这个数据集上,经过两轮的迭代可以得到86%左右的准确率。你也可以通过调整网络结构和超参数,来获得更好的效果。
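例如,下面给出一个仅作示意的改进思路:在取句子的平均表示之后,先经过一个带\ ``relu``\ 激活的隐藏层,再做最终的线性分类。其中\ ``hidden_size``\ 是一个假设的超参数取值,实际效果需要自行训练验证。

.. code::

    # 仅为示意:在 BOW 网络中增加一个隐藏层,hidden_size 为假设的超参数
    class MyImprovedNet(paddle.nn.Layer):
        def __init__(self, hidden_size=128):
            super(MyImprovedNet, self).__init__()
            self.emb = paddle.nn.Embedding(vocab_size, emb_size)
            self.hidden = paddle.nn.Linear(in_features=emb_size, out_features=hidden_size)
            self.fc = paddle.nn.Linear(in_features=hidden_size, out_features=2)
            self.dropout = paddle.nn.Dropout(0.5)

        def forward(self, x):
            x = self.emb(x)
            x = paddle.reduce_mean(x, dim=1)
            x = paddle.nn.functional.relu(self.hidden(x))
            x = self.dropout(x)
            x = self.fc(x)
            return x

其用法与上面的\ ``MyNet``\ 相同,构造后直接传入\ ``train()``\ 即可。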
################
自然语言处理
################
在这里PaddlePaddle为大家提供了以下几篇NLP教程供大家学习:
- `N-Gram <./n_gram_model/n_gram_model.html>`_ :介绍使用 Paddle 实现N-Gram 模型。
- `文本分类 <./imdb_bow_classification/imdb_bow_classification.html>`_ :介绍使用 Paddle 在IMDB数据集上完成文本分类。
- `文本翻译 <./seq2seq_with_attention/seq2seq_with_attention.html>`_ :介绍使用 Paddle 实现文本翻译。
.. toctree::
:hidden:
:titlesonly:
n_gram_model/n_gram_model.rst
imdb_bow_classification/imdb_bow_classification.rst
seq2seq_with_attention/seq2seq_with_attention.rst
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"# 用N-Gram模型在莎士比亚文集中训练word embedding\n",
"N-gram 是计算机语言学和概率论范畴内的概念,是指给定的一段文本中N个项目的序列。\n",
"N=1 时 N-gram 又称为 unigram,N=2 称为 bigram,N=3 称为 trigram,以此类推。实际应用通常采用 bigram 和 trigram 进行计算。\n",
"本示例在莎士比亚文集上实现了trigram。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 环境\n",
"本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop。"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.0.0'"
]
},
"execution_count": 23,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import paddle\n",
"paddle.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 数据集&&相关参数\n",
"训练数据集采用了莎士比亚文集,[下载](https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt),保存为txt格式即可。<br>\n",
"context_size设为2,意味着是trigram。embedding_dim设为256。"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2020-09-09 14:58:26-- https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt\n",
"正在解析主机 ocw.mit.edu (ocw.mit.edu)... 151.101.110.133\n",
"正在连接 ocw.mit.edu (ocw.mit.edu)|151.101.110.133|:443... 已连接。\n",
"已发出 HTTP 请求,正在等待回应... 200 OK\n",
"长度:5458199 (5.2M) [text/plain]\n",
"正在保存至: “t8.shakespeare.txt”\n",
"\n",
"t8.shakespeare.txt 100%[===================>] 5.21M 94.1KB/s 用时 70s \n",
"\n",
"2020-09-09 14:59:38 (75.7 KB/s) - 已保存 “t8.shakespeare.txt” [5458199/5458199])\n",
"\n"
]
}
],
"source": [
"!wget https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"embedding_dim = 256\n",
"context_size = 2"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Length of text: 5458199 characters\n"
]
}
],
"source": [
"# 文件路径\n",
"path_to_file = './t8.shakespeare.txt'\n",
"test_sentence = open(path_to_file, 'rb').read().decode(encoding='utf-8')\n",
"\n",
"# 文本长度是指文本中的字符个数\n",
"print ('Length of text: {} characters'.format(len(test_sentence)))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 去除标点符号\n",
"因为标点符号本身无实际意义,用`string`库中的punctuation,完成英文符号的替换。"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'!': '', '\"': '', '#': '', '$': '', '%': '', '&': '', \"'\": '', '(': '', ')': '', '*': '', '+': '', ',': '', '-': '', '.': '', '/': '', ':': '', ';': '', '<': '', '=': '', '>': '', '?': '', '@': '', '[': '', '\\\\': '', ']': '', '^': '', '_': '', '`': '', '{': '', '|': '', '}': '', '~': ''}\n"
]
}
],
"source": [
"from string import punctuation\n",
"process_dicts={i:'' for i in punctuation}\n",
"print(process_dicts)"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"28343\n"
]
}
],
"source": [
"punc_table = str.maketrans(process_dicts)\n",
"test_sentence = test_sentence.translate(punc_table)\n",
"test_sentence = test_sentence.lower().split()\n",
"vocab = set(test_sentence)\n",
"print(len(vocab))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 数据预处理\n",
"将文本被拆成了元组的形式,格式为(('第一个词', '第二个词'), '第三个词');其中,第三个词就是我们的目标。"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[['this', 'is'], 'the'], [['is', 'the'], '100th'], [['the', '100th'], 'etext']]\n"
]
}
],
"source": [
"trigram = [[[test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2]]\n",
" for i in range(len(test_sentence) - 2)]\n",
"\n",
"word_to_idx = {word: i for i, word in enumerate(vocab)}\n",
"idx_to_word = {word_to_idx[word]: word for word in word_to_idx}\n",
"# 看一下数据集\n",
"print(trigram[:3])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 构建`Dataset`类 加载数据\n",
"用`paddle.io.Dataset`构建数据集,然后作为参数传入到`paddle.io.DataLoader`,完成数据集的加载。"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"import numpy as np\n",
"batch_size = 256\n",
"paddle.disable_static()\n",
"class TrainDataset(paddle.io.Dataset):\n",
" def __init__(self, tuple_data):\n",
" self.tuple_data = tuple_data\n",
"\n",
" def __getitem__(self, idx):\n",
" data = self.tuple_data[idx][0]\n",
" label = self.tuple_data[idx][1]\n",
" data = np.array(list(map(lambda w: word_to_idx[w], data)))\n",
" label = np.array(word_to_idx[label])\n",
" return data, label\n",
" \n",
" def __len__(self):\n",
" return len(self.tuple_data)\n",
"train_dataset = TrainDataset(trigram)\n",
"train_loader = paddle.io.DataLoader(train_dataset,places=paddle.CPUPlace(), return_list=True,\n",
" shuffle=True, batch_size=batch_size, drop_last=True)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 组网&训练\n",
"这里用paddle动态图的方式组网。为了构建Trigram模型,用一层 `Embedding` 与两层 `Linear` 完成构建。`Embedding` 层对输入的前两个单词embedding,然后输入到后面的两个`Linear`层中,完成特征提取。"
]
},
{
"cell_type": "code",
"execution_count": 13,
"metadata": {},
"outputs": [],
"source": [
"import paddle\n",
"import numpy as np\n",
"hidden_size = 1024\n",
"class NGramModel(paddle.nn.Layer):\n",
" def __init__(self, vocab_size, embedding_dim, context_size):\n",
" super(NGramModel, self).__init__()\n",
" self.embedding = paddle.nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim)\n",
" self.linear1 = paddle.nn.Linear(context_size * embedding_dim, hidden_size)\n",
" self.linear2 = paddle.nn.Linear(hidden_size, len(vocab))\n",
"\n",
" def forward(self, x):\n",
" x = self.embedding(x)\n",
" x = paddle.reshape(x, [-1, context_size * embedding_dim])\n",
" x = self.linear1(x)\n",
" x = paddle.nn.functional.relu(x)\n",
" x = self.linear2(x)\n",
" return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 定义`train()`函数,对模型进行训练。"
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch: 0, batch_id: 0, loss is: [10.252193]\n",
"epoch: 0, batch_id: 500, loss is: [6.894636]\n",
"epoch: 0, batch_id: 1000, loss is: [6.849346]\n",
"epoch: 0, batch_id: 1500, loss is: [6.931605]\n",
"epoch: 0, batch_id: 2000, loss is: [6.6860313]\n",
"epoch: 0, batch_id: 2500, loss is: [6.2472367]\n",
"epoch: 0, batch_id: 3000, loss is: [6.8818874]\n",
"epoch: 0, batch_id: 3500, loss is: [6.941615]\n",
"epoch: 1, batch_id: 0, loss is: [6.3628616]\n",
"epoch: 1, batch_id: 500, loss is: [6.2065206]\n",
"epoch: 1, batch_id: 1000, loss is: [6.5334334]\n",
"epoch: 1, batch_id: 1500, loss is: [6.5788]\n",
"epoch: 1, batch_id: 2000, loss is: [6.352103]\n",
"epoch: 1, batch_id: 2500, loss is: [6.6272373]\n",
"epoch: 1, batch_id: 3000, loss is: [6.801074]\n",
"epoch: 1, batch_id: 3500, loss is: [6.2274427]\n"
]
}
],
"source": [
"vocab_size = len(vocab)\n",
"epochs = 2\n",
"losses = []\n",
"def train(model):\n",
" model.train()\n",
" optim = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())\n",
" for epoch in range(epochs):\n",
" for batch_id, data in enumerate(train_loader()):\n",
" x_data = data[0]\n",
" y_data = data[1]\n",
" predicts = model(x_data)\n",
" y_data = paddle.reshape(y_data, ([-1, 1]))\n",
" loss = paddle.nn.functional.softmax_with_cross_entropy(predicts, y_data)\n",
" avg_loss = paddle.mean(loss)\n",
" avg_loss.backward()\n",
" if batch_id % 500 == 0:\n",
" losses.append(avg_loss.numpy())\n",
" print(\"epoch: {}, batch_id: {}, loss is: {}\".format(epoch, batch_id, avg_loss.numpy())) \n",
" optim.minimize(avg_loss)\n",
" model.clear_gradients()\n",
"model = NGramModel(vocab_size, embedding_dim, context_size)\n",
"train(model)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 打印loss下降曲线\n",
"通过可视化loss的曲线,可以看到模型训练的效果。"
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[<matplotlib.lines.Line2D at 0x14e27b3c8>]"
]
},
"execution_count": 20,
"metadata": {},
"output_type": "execute_result"
},
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAXoAAAD4CAYAAADiry33AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADh0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uMy4yLjEsIGh0dHA6Ly9tYXRwbG90bGliLm9yZy+j8jraAAAgAElEQVR4nO3dd3xV9f3H8dcnBMgAkgCBQIKAgogiIKQoKo46ceHAqq1W/dX5k4qttdUO7c8uOy2Oaqm1rioqLqyjOKqixRH2Hg4gYYWVQPb4/P64NxpiEMjNzb2c+34+Hnnk3nNP7vkw8s7J93zO92vujoiIBFdSrAsQEZHoUtCLiAScgl5EJOAU9CIiAaegFxEJuORYF9BU9+7dvV+/frEuQ0RknzJr1qxN7p7d3GtxF/T9+vWjoKAg1mWIiOxTzGzVrl7T0I2ISMAp6EVEAk5BLyIScAp6EZGAU9CLiAScgl5EJOAU9CIiAReYoC/aVsEfpy9j1eayWJciIhJXAhP0JeU13P3mShatLY11KSIicSUwQZ+blQpA4dbyGFciIhJfAhP0Gant6ZySTOHWiliXIiISVwIT9AB5WWkUKehFRHYSsKBP1Rm9iEgTgQr63MxUCreWowXPRUS+EKigz8tKpay6jpKKmliXIiISNwIW9GkAGr4REWkkYEGvFksRkaZ2G/Rm9qCZbTSzhY22dTWz18xsRfhz1i6+9tLwPivM7NLWLLw5XwS9zuhFRBrsyRn9Q8CpTbbdDLzh7gOBN8LPd2JmXYHbgMOBUcBtu/qB0FoyUtvTqaN66UVEGttt0Lv7O8CWJpvHAQ+HHz8MnN3Ml54CvObuW9x9K/AaX/6B0arMLNx5o6AXEWnQ0jH6nu6+Lvx4PdCzmX1ygTWNnheGt0VVXlYqRdsU9CIiDSK+GOuhpvWIGtfN7CozKzCzguLi4ojqCd00pYuxIiINWhr0G8ysF0D488Zm9ikC+jR6nhfe9iXuPtnd8909Pzs7u4UlheRmpbK9sla99CIiYS0N+mlAQxfNpcALzezzb+BkM8sKX4Q9Obwtqhp66TXnjYhIyJ60Vz4BzAQGmVmhmX0HuAM4ycxWACeGn2Nm+Wb2AIC7bwF+AXwU/rg9vC2q1EsvIrKz5N3t4O4X7eKlE5rZtwC4otHzB4EHW1xdC+RmqpdeRKSxQN0ZC9A1vQOp7dup80ZEJCxwQW9m6rwREWkkcEEPoc4bDd2IiIQEMuh105SIyBcCGvRpbCuvYXuleulFRAIZ9A2dNzqrFxEJaNA39NLrpikRkcAGvVaaEhFpEMig796pAx2TkzR0IyJCQIPezMItluqlFxEJZNBDaPhGQzciIgEO+tzMVF2MFREhwEGfl5XK5rJqyqtrY12KiEhMBTroQS2WIiKBD/pCdd6ISIILcNCrl15EBCIMejObaGYLzWyRmd3QzOvHmVmJmc0Nf9wayfH2RnanjnRol6QWSxFJeLtdYWpXzGwIcCUwCqgGXjWzf7n7yia7znD3MyKosUWSkozemSkaoxeRhBfJGf1g4AN3L3f3WuBt4NzWKat1qJdeRCSyoF8IjDGzbmaWBpwG9Glmv9FmNs/MXjGzQ5p7IzO7yswKzKyguLg4gpJ2lqcFSEREWh707r4E+C0wHXgVmAvUNdltNtDX3YcBdwPP7+K9Jrt7vrvnZ2dnt7SkL8nNTGXTjioqa5qWJSKSOCK6GOvuf3f3ke5+DLAVWN7k9VJ33xF+/DLQ3sy6R3LMvZHXVfPSi4hE2nXTI/x5P0Lj8483eT3HzCz8eFT4eJsjOebeUIuliEgEXTdhz5hZN6AGuM7dt5nZNQDufj8wHrjWzGqBCuBCd/cIj7nHPl9pSkEvIgksoqB39zHNbLu/0eN7gHsiOUYkenZJITnJ1EsvIgktsHfGArRLMnpnqvNGRBJboIMewtMV62KsiCSwwAd9nlaaEpEElwBBn8aG0iqqatVLLyKJKfBBnxuernjdtsoYVyIiEhuBD/rP56XXBVkRSVAJFPQapxeRxBT4oM/pkkK7JFPnjYgkrMAHfXK7JHK6pGjoRkQSVuCDHtRiKSKJLSGCPjcrVfPdiEjCSoigz8tKY31pJdW19bEuRUSkzSVI0KdS77C+RL30IpJ4EiPow9MVF27TOL2IJJ7ECHotQCIiCSzSFaYmmtlCM1tkZjc087qZ2V1mttLM5pvZiEiO11I5GSkkmYJeRBJTi4PezIYAVwKjgGHAGWY2oMluY4GB4Y+rgPtaerxIdEhOomeXFHXeiEhCiuSMfjDwgbuXu3st8DahdWMbGwc84iHvA5lm1iuCY7aYeulFJFFFEvQLgTFm1s3M0oDTgD5N9skF1jR6XhjethMzu8rMCsysoLi4OIKSdi0vK01DNyKSkFoc9O6+BPgtMB14FZgLtGjSd3ef7O757p6fnZ3d0pK+Um5mKutLK6mtUy+9iCSWiC7Guvvf3X2kux8DbAWWN9mliJ3P8vPC29pcXlYqdfXO+lL10otIYom066ZH+PN+hMbnH2+yyzTg2+HumyOAEndfF8kxW0otliKSqJIj/PpnzKwbUANc5+7bzOwaAHe/H3iZ0Nj9SqAcuDzC47VYw0pT6rwRkUQTUdC7+5hmtt3f6LED10VyjNbSOzMF0Bm9iCSehLgzFqBjcjt6dumoFksRSTgJE/QQ6rzRSlMikmgSKujVSy8iiSjBgj6VtdsqqKv3WJciItJmEiroc7NSqa13Nm5XL72IJI6ECnr10otIIkqwoA8vQKLOGxFJIAkV9LmZumlKRBJPQgV9Svt2dO/UUUM3IpJQEirooWFeegW9iCSOhAv63CzdNCUiiSXhgj4vK5WirRXUq5deRBJEAgZ9GtV19RTvqIp1KSIibSLxgj6zocVSwzcikhgSL+jVSy8iCSbhgj43S2f0IpJYIl1K8HtmtsjMFprZE2aW0uT1y8ys2Mzmhj+uiKzcyKV1SKZregd13ohIwmhx0JtZLnA9kO/uQ4B2wIXN7Pqkuw8PfzzQ0uO1JvXSi0giiXToJhlINbNkIA1YG3lJ0RcKeo3Ri0hiaHHQu3sR8AdgNbAOKHH36c3sep6ZzTezqWbWp7n3MrOrzKzAzAqKi4tbWtIey80M9dKHlrQVEQm2SIZusoBxQH+gN5BuZhc32e1FoJ+7DwVeAx5u7r3cfbK757t7fnZ2dktL2mN5WWlU1dazaUd11I8lIhJrkQzdnAh86u7F7l4DPAsc2XgHd9/s7g13Jj0AjIzgeK2mocVSF2RFJBFEEvSrgSPMLM3MDDgBWNJ4BzPr1ejpWU1fj5Vc9dKLSAJJbukXuvsHZjYVmA3UAnOAyWZ2O1Dg7tOA683srPDrW4DLIi85crm6O1ZEEkiLgx7A3W8Dbmuy+dZGr98C3BLJMaKhc0p7MtPaawESEUkICXdnbIPcTLVYikhiSNig101TIpIoEjboczPTKNqmXnoRCb6EDfq8rFTKq+vYWl4T61JERKIqoYMe1GIpIsGXsEHf0EuvzhsRCbqED
fq8rDRAvfQiEnwJG/QZqe3pnJKsoRsRCbyEDXoIz2Kp+W5EJOASOujzstI0dCMigZfgQR+6aUq99CISZAkf9DuqaimtqI11KSIiUZPwQQ+wRhdkRSTAEjzo1WIpIsGX0EHfMC+9Om9EJMgiCnoz+56ZLTKzhWb2hJmlNHm9o5k9aWYrzewDM+sXyfFaW2Zae9I7tFMvvYgEWiSLg+cC1wP57j4EaAdc2GS37wBb3X0AcCfw25YeLxrMTC2WIhJ4kQ7dJAOpZpYMpAFrm7w+Dng4/HgqcEJ4fdm4kZuVqvluRCTQWhz07l4E/IHQIuHrgBJ3n95kt1xgTXj/WqAE6Nb0vczsKjMrMLOC4uLilpbUIqFeeg3diEhwRTJ0k0XojL0/0BtIN7OLW/Je7j7Z3fPdPT87O7ulJbVIXlYqpZW1lFZqXnoRCaZIhm5OBD5192J3rwGeBY5ssk8R0AcgPLyTAWyO4JitLjcz1GKp4RsRCapIgn41cISZpYXH3U8AljTZZxpwafjxeOBNj7P5Br5YgERBLyLBFMkY/QeELrDOBhaE32uymd1uZmeFd/s70M3MVgLfB26OsN5Wp5WmRCTokiP5Yne/DbityeZbG71eCZwfyTGirWt6B1LaJ2noRkQCK6HvjAX10otI8CV80EO4xXKbhm5EJJgU9IRXmtIZvYgElIKe0CyWW8tr2FGleelFJHgU9HzReaOzehEJIgU9ofluAIo0Ti8iAaSgRzdNiUiwKeiB7E4d6ZicpKAXkUBS0BPqpVfnjYgElYI+LFfTFYtIQCnow3R3rIgElYI+LC8rlc1l1VRU18W6FBGRVqWgD8tTi6WIBJSCPqwh6Ndo+EZEAkZBH6aVpkQkqBT0YT06d6R9O9MFWREJnEgWBx9kZnMbfZSa2Q1N9jnOzEoa7XPrrt4v1pKSQr30arEUkaBp8QpT7r4MGA5gZu0ILQT+XDO7znD3M1p6nLaUm5VK0Tad0YtIsLTW0M0JwMfuvqqV3i8m8jLVSy8iwdNaQX8h8MQuXhttZvPM7BUzO6S5HczsKjMrMLOC4uLiVipp7+VlpVK8vYrKGvXSi0hwRBz0ZtYBOAt4upmXZwN93X0YcDfwfHPv4e6T3T3f3fOzs7MjLanFGqYrXqvhGxEJkNY4ox8LzHb3DU1fcPdSd98Rfvwy0N7MurfCMaMiLyvUYqnhGxEJktYI+ovYxbCNmeWYmYUfjwofb3MrHDMqNC+9iARRi7tuAMwsHTgJuLrRtmsA3P1+YDxwrZnVAhXAhe7ukRwzmnp2SSE5yTQNgogESkRB7+5lQLcm2+5v9Pge4J5IjtGW2iUZvTJTdEYvIoGiO2ObUIuliASNgr6J3CytNCUiwaKgbyIvK5UN2yupqlUvvYgEg4K+ibysNNxh3bbKWJciItIqFPRN5GY2LECi4RsRCQYFfRNf9NKrxVJEgkFB30SvjBTaJWleehEJDgV9E8ntksjpkqLOGxEJDAV9M3KzUnVGLyKBoaBvRp4WIBGRAFHQNyMvM5V1JRXU1NXHuhQRkYgp6JuRl5VGvcP6EvXSi8i+T0HfDE1XLCJBoqBvRq566UUkQBT0zeiVkYqZzuhFJBhaHPRmNsjM5jb6KDWzG5rsY2Z2l5mtNLP5ZjYi8pKjr0NyuJdenTciEgAtXnjE3ZcBwwHMrB1QBDzXZLexwMDwx+HAfeHPcS83M1VDNyISCK01dHMC8LG7r2qyfRzwiIe8D2SaWa9WOmZU5emmKREJiNYK+gtpfoHwXGBNo+eF4W07MbOrzKzAzAqKi4tbqaTI5GWlsb6kklr10ovIPi7ioDezDsBZwNMtfQ93n+zu+e6en52dHWlJrSI3K5XaemfD9qpYlyIiEpGIFgcPGwvMdvcNzbxWBPRp9DwvvC3ufd5Lv6X88znq483G0kqKd1RxcK8umFmsy9knVNbU8dnmMj7bVMZnm8v5bFMZG0orueW0wRzYs3OsyxOJitYI+otoftgGYBowwcymELoIW+Lu61rhmFGXl5UGxMcCJO7OhtIqFhSVsKCohIXhj43h3zZG79+N/xt3iIIqrLKmjtVbyvl0U0Ogl/HZpnI+21zGuiZ3O3fv1IHKmnomPD6baROOJqV9uxhVLRI9EQW9maUDJwFXN9p2DYC73w+8DJwGrATKgcsjOV5b6pWRAsAbSzfSNb0DvTJSyclIoUtKclTPnt2dtSWVLCgsYdHaL4J9045qAJIMDsjuxNEDujMkN4N6d+5+cyVjJ83g0tH9uOGkgXRJaR+1+uJFVW0da7aU8+mm0Fn5p+Gz9FWby1lbUoH7F/t2Te9Av25pjD6gG/27pdO3ezr9u6XTr3sanVPa8/byYi598EN+/fISbh83JHZ/KJEoMW/8HREH8vPzvaCgINZlAHD6XTNYtLZ0p21pHdqRk5FCr4wUcrqkhj43PM9IoVdGKllp7ffoh4G7U7i1goXhM/UFRSUsWlvKlrJQqLdLMgb26MSQ3AyG9O7CoXkZDO7VhbQOO/983lJWzR+mL+OJD1fTLb0DPzz1IMaPyCMpKXjDOf9ZtpHbXljEmq3lO4V5Zlp7+nVLp3/3dPqFQzz0OZ2M1N3/4PvlvxbzwLuf8vdL8zlhcM8o/glEosPMZrl7frOvKeh3rbaung3bq1hfUsG6kkrWl1Q2+lzB+pJKNmyvoq5+57/DhhuudvoB0CWFnIxU6ur9i+GXtSVsK68BIDnJOLBnZ4bkduHQ3AyG5IZCfW+GEhYWlXDrCwuZvXobw/tkcvu4Qxial9mqfyextKWsmpP+9DYZae05Y2hv+ofDvH/3dDLTOkT03lW1dZxz739ZX1rJqxPH0KNLSitVLdI2FPRRVFfvbNpRFf4B0MwPhNIKNpRUUd2oTbN9O2NQTufPA31I7wwG5XRulfHh+nrnuTlF/OaVpWwuq+KC/D7cdMogunXqGPF7x9rEKXN4ecE6Xvzu0RyU06XV33/lxu2ccfe7fK1fVx6+fFQgfyOS4PqqoG+Ni7EJrV2S0bNLCj27pECf5s+e3Z0tZdWfXwg8sGdnOiRHZ5qhpCTjvJF5nHxITya9voKH/vsZLy9Yx40nD+Jbh+9Hcrt9c3qj1xdv4IW5a7nhxIFRCXmAAT06c+sZh/Dj5xbw4HufcsWY/aNyHJG2tm9+1+9jzIxunTqGzt5zM6IW8o11TmnPT884mFdvGMPQvExum7aIM+5+lw8+2Rz1Y7e2kooafvL8Ag7K6cz/Hjcgqse6aFQfTjmkJ799dSkLi0qieizZNxVvr+KeN1dw+l0zeHfFpliXs0c0dJMA3J1XF67nly8toWhbBWcN682PTxtMTsa+MQ79o6nzeXrWGp6/7qg2ueawtayasZNmkNaxHf/67tFfuvgticfdmb16K4/MXMXLC9ZRU+d0TkmmY3ISr0w8huzOsR8a/aqhG53RJwAzY+yhvXj9+8dy/QkDeXXRer7+x7f4y1srqaqti3V5X2nGimKeLFjDVccc0GYXlrPSO/Cnbwzj001l/OJfi9vkmBKfKqrrmPLhak6/613Ou28mby7Z
yMVH9OWNG4/lmWuPZHtlLTc+PY/6+vg6YW5KZ/QJaPXmcn7x0mJeW7yB/t3TufXMgzl+UI9Yl/UlZVW1nHznO3RMTuLliWPa/Gam3766lPve+pj7Lx7BqUP2ibn4pJV8tqmMR99fxdMFayitrOWgnM5cMrovZw/PJb3jF7/hPfr+Kn72/EJ+ctpgrjwmttd0dDFWdrJftzT+9u183lq2kdtfXMzl//iIEwf34GdnHEzfbumxLu9zv//3MtaWVPD01aNjcsfq9048kPdWbuJHzyxgWJ9MemXE51QY0jrq6p23lm3kkZmreHt5MclJxqlDcvj26H58rV9Ws/fGXHz4fsxYXszv/r2UI/bvxqF5GTGofPd0Rp/gqmvrefC9T7nrjRXU1jtXH7M/1x0/IOZTAXz02RbOv38mlx3Zj5+fdUjM6vh0Uxmn3zWDoXkZ/POKI2inlsvA2VpWzZMFa3js/VUUbq2gR+eOfOvwvlw0qs8e3U/RcE0ntUPomk7jM/62pD562a31JZX85pUlvDB3LWMGdueBS/PpmBybsK+sqeO0STOorqvn3zccE7NvnAZPF6zhpqnz+eGpg6Le9SNtZ37hNh6ZuYpp89ZSXVvP4f278u3R/Tj5kJ6038s25Jkfb+abD7zPeSPy+MP5w6JU8VfT0I3sVk5GCpMuPIyjB3Tnpqnzuf6JOdz7zREx6bu/8/XlfLKpjH9ecXjMQx5g/Mg83lpezJ+mL+fIA7ozfBf3S0j8q6yp46X563jk/VXMW7ONtA7t+EZ+Hpcc0Y9BOS2fFHD0Ad2YcPwA7n5zJcccmM1Zw3q3YtWRi/13kcSV8/P7UFZVy89fXMxNU+fzx/OHtekdovPWbONv73zChV/rw1EDurfZcb+KmfHrsw9l7uptTJwyh5euH0OnOPgBJHtu844q/jbjU54qWMOWsmoOyE7n/846hHNH5NK5lSYBnHjCQN5buYmfPLuAw/pk0qdrWqu8b2tQe6V8yWVH9eemUwbx3Jwibp22kLYa3quuredHz8ynR+cUfnz64DY55p7KSGvPnRcMZ82Wcn4+bVGsy5G98NL8dZx05ztMfudjvtYvi39ecTivf/9YLj2yX6uFPEByuyQmXXgYANdPmUNNHK1Op6CXZv3vcQdwzbEH8Nj7q7nj1aVtEvZ/eWslS9dv51fnDInLqZZH9e/KhK8PZOqsQqbNWxvrcuJKvF3rg9AdrNc+NovrHp9NXlYqr0w8hr9eks9RA7pHbarxPl3T+PW5hzJn9TYmvb4iKsdoCf3+Kc0yM3506iDKqmr569uf0LljMhO+PjBqx1u6vpR73lzJ2cN7x/U0wdd/fQDvrijmJ8/F36/n0VJVW8fG0tDEfQ2ztn4xaV9oMr9NO6o5ekB3bh57EIN7RWcuoj3l7kybt5afT1tEWVUdPzr1IK4c07/NrjedOaw37ywv5t63VnLUgO6MPqBbmxz3q0TUdWNmmcADwBDAgf9x95mNXj8OeAH4NLzpWXe//aveU1038aW+3vnB0/N4dk4Rt515MJcf1b/Vj1FbV8+59/2Xoq0VvPb9Y+maHtmUw9G2Zks5YyfN4KCczky56oh9dqI4CN35ub60mQAvqWR9aWhbw6I3jXXumExOo7UY0jsm8+zsIkorazhvRB7fP+lAesdgCc6N2yv56XMLmb54A8P7ZPL78UMZGIOV18qqajnz7ncpr67jlYljyGqD/9PR7LqZBLzq7uPDi4Q3d3ozw93PiPA4EiNJScbvxg+lvLqO/3txMekdkvnG1/rs/gv3wgPvfsr8whLu/eaIuA95CP16/qtzhjBxylzu/c/HTDwxer/pRMM7y4v5/b+XsWZr+efrITSWmdaenC6hAD80N3OnxXV6ZYRmam1ubPuGEw7kL2+t5B///YwX563l8qP6c+1xB+zRwi+Rcneen1vEz6ctpqKmjh+fdhDfOXr/mN33kN4xmbsuOoxz/vIeP3xmPpMvGRnTdZ1bfEZvZhnAXGB/38WbhM/of7A3Qa8z+vhUVVvHlY/M4t0Vxdx10WGcMbR12sc+Lt7B2EkzOH5QNvdfHNtvhr31vSfn8sLcIp66ejT5/brGupzdqq937nv7Y/4wfRn7d0/nqAHdPw/wnl1Cq6PldEkhtUNk908Ubavgj9OX8dycIjJS2zPh+AFcMrpv1O7L2FBayU+eW8DrSzYysm8Wvxs/lAOyO0XlWHvrgRmf8MuXlvCLs4dwyRF9o3qsqNwwZWbDgcnAYmAYMAuY6O5ljfY5DngGKATWEgr9r2xZUNDHr4rqOi598ENmr97K376dz/EHRTY/Tn29c8HkmSzfsIPXvnfMPreq0/bKGk67awb19fDKDWPi8gJyg+2VNdz41DymL97AuOG9uePcoREH+u4sWlvCHa8sZcaKTeRlpXLTKYM4c2jvVmvXdXeemV3E7S8uoqq2nptOGcTlR/WPq7uX6+udyx76iA8+2cy0CUdH1Ku/O9GavTIZGAHc5+6HAWXAzU32mQ30dfdhwN3A87so8CozKzCzguLi4ghKkmhK7dCOBy7LZ3CvLlzz2CxmfhzZ3PaPvr+Kjz7bys/OOHifC3kIzfk/6cLDWF8aGheOx84TgBUbtjPunvd4c+lGbjvzYP58wfCohzzAIb0zePQ7h/Pod0bRJaU9E6fMZdy97/HfjyOfw31dSQWXP/QRP3h6HoNyOvPqDcdwxZjYDdXsSlKS8cfzh9E5JZnvPjGbyprYzBYbSdAXAoXu/kH4+VRCwf85dy919x3hxy8D7c3sS3fBuPtkd8939/zs7OwISpJo65LSnof/ZxT7dU3jioc/Ys7qrS16nzVbyvntq0s59sBszhuR28pVtp0R+2VxwwkDmTZvLc/NKYp1OV/y0vx1jLv3PUora3n8yiO4/Kj+bT48NmZgNv/67tHcecEwtpRV882/fcDl//iQpetL9/q93J2nPlrDyX96hw8+2cJtZx7Mk1eNpn/3+JmMr6nszh354zeGs3zDDn710pKY1NDioHf39cAaMxsU3nQCoWGcz5lZjoX/V5nZqPDx9r0ljmQnXdM78M8rDqd7545c9o+PWLJu775h3Z1bnl2AAb8+99B9aly+Of97/ABG9evKz55fyKrNZbv/gjZQW1fPr19ewnWPz+agnM68dP3RjOofu+sISUnGOYfl8caNx/KT0wYza9VWxk6awU1Pz2NdScUevUfRtgou/cdH/PCZ+Rzcuwuv3jCGy4/qv0+s7XvsgdlcOaY/j76/iumL1rf58SNtrxxOqL2yA/AJcDlwAYC7329mE4BrgVqgAvi+u//3q95TY/T7jjVbyjn//pnU1tfz1NWj2X8PL4A9+dFqfvTMAn559hAujvIFqrZStK2CsX9+h/7ZnZh6zei9nhSrNW3eUcWEx+cw85PNfHt0X356+sFtsnzl3thWXs1f3vqYh977DDP4ztH9uea4A5q9zuHuTPloDb96aQn17tw89iAuPrzvPhHwjVXX1nPufe9RuLWCVyaOafVprzV7pUTNyo07uOCvM+mYnMRT14wmL+urbyBaX1L
JSXe+zSG9u/D4FUfsc9+sX+Wl+eu47vHZTDh+AD84ZdDuvyAK5q3ZxrWPzWJzWTW/PudQzhuZF5M69tSaLeX86bXlPDeniKy09nz36wO5+Ii+n/9gKtxazs3PLODdlZsYvX83fjd+6D59k9onxTs44+53ozLttYJeomrx2lIunDyTrukdeOqa0fTo3PyFVXfnykcKeHflJv59wzFxtchJa/nh1Hk8PauQP18wnNMP7dWmN1M98eFqbnthET26dOT+i0cyJDc+F8FozsKiUIfOuys3sV/XNH5wyiBKK2r4zcuhMe1bThvMN0ftF4gTg4Zpr286ZRDXHd96014r6CXqZq3ayiV//4A+WWk8efURZKZ9+canF+YWMXHKXH56+mCuGBPbZdeipayqlnP+8h7LN+ygR+eOnDMil/NH5jGgR/Ta6ipr6vj5tEVM+WgNxxyYzaQLhrfJnZjR8M7yYjJIGyYAAAdTSURBVH7zytLPr/scPaA7d5x36G5/U9yXuDvXT5nLywvW8dTVoxnZN6tV3ldBL23ivZWbuPyhjxic05nHrjh8p7snN+2o4qQ/vU2/7ulMvebIuGuDa03VtfW8uXQjU2cV8p9lG6mrd4b3yWT8yDzOHNa7Ve8UXbutgmsfm8W8whImHD+A75104D7/d1tf77w4fy317pw9PHefv1jfnNLKGk6bNAOAlye2zj0YCnppM68v3sA1j81iRN8sHr581Of92hMen830RRt46fqjYzL3SKwUb6/ihblFPF1QyLIN2+mQnMQph+QwfmQeRw/oHlEo/3flJiY8MYea2nr++I1hnHxITitWLtE2a9VWvvHXmZx2aC/uunB4xD/QFPTSpqbNW8vEKXM49sBsJl+Sz3+WbeTqR2fxg5MPjOoMmPHM3VlYVMrUWWt4Yd5atpXXkNMlhXNH5DJ+ZN4edyw1vNffZnzCHa8s5YDsTvz1kpF79fUSP+55cwV/mL6c348fyvn5kc0hpaCXNjflw9Xc/OwCTj64J3PWbCO7U0demHBUTNsO40VVbR1vLAkN7by1bCP1DiP7ZjF+ZB6nD+31lb/G76iq5UdT5/PSgnWcfmgvfjd+aFwstygtU1fvfOuB95lfWMK/vnt0RD+wFfQSEw0TOrVLMl647qh9qgukrWwsreS5OUU8PauQlRt3kNI+iVMPyWH8yD4ceUC3nbpMPi7ewdWPzuKT4h3cMnYwV4xp+7tcpfWtK6lg7KQZ5GWl8uy1R7X4ngcFvcTMlA9X0yE5iXNHxHc/d6y5O/MKS5g6aw3T5q6ltLKW3hkpnDcyj/NG5LFsw3ZufGoeHZKTuOeiwzgyTtbTldYxfdF6rnp0FleO6c9PTj+4Re+hoBfZh1TW1PH6kg08XVDIjBXF1Ie/RYf1yeS+b42IyYIeEn2/emkxA3p04oKv7deir1fQi+yj1peEhnbq6uu58pj9ozanu+z7ornClIhEUU5GCtced0Csy5B9nFogREQCTkEvIhJwCnoRkYBT0IuIBJyCXkQk4BT0IiIBp6AXEQk4Bb2ISMDF3Z2xZlYMrIrgLboDm1qpnGiI9/og/muM9/og/muM9/pANe6tvu6e3dwLcRf0kTKzgl3dBhwP4r0+iP8a470+iP8a470+UI2tSUM3IiIBp6AXEQm4IAb95FgXsBvxXh/Ef43xXh/Ef43xXh+oxlYTuDF6ERHZWRDP6EVEpBEFvYhIwAUm6M3sVDNbZmYrzezmWNfTlJn1MbP/mNliM1tkZhNjXVNzzKydmc0xs3/FupbmmFmmmU01s6VmtsTMRse6psbM7Hvhf9+FZvaEmaXEQU0PmtlGM1vYaFtXM3vNzFaEP2fFYY2/D/87zzez58wsM57qa/TajWbmZha3C/kGIujNrB1wLzAWOBi4yMxatsJu9NQCN7r7wcARwHVxWCPARGBJrIv4CpOAV939IGAYcVSrmeUC1wP57j4EaAdcGNuqAHgIOLXJtpuBN9x9IPBG+HksPcSXa3wNGOLuQ4HlwC1tXVQjD/Hl+jCzPsDJwOq2LmhvBCLogVHASnf/xN2rgSnAuBjXtBN3X+fus8OPtxMKqNzYVrUzM8sDTgceiHUtzTGzDOAY4O8A7l7t7ttiW9WXJAOpZpYMpAFrY1wP7v4OsKXJ5nHAw+HHDwNnt2lRTTRXo7tPd/fa8NP3gbw2L+yLWpr7OwS4E/ghENddLUEJ+lxgTaPnhcRZiDZmZv2Aw4APYlvJl/yZ0H/a+lgXsgv9gWLgH+HhpQfMLD3WRTVw9yLgD4TO7tYBJe4+PbZV7VJPd18Xfrwe6BnLYvbA/wCvxLqIxsxsHFDk7vNiXcvuBCXo9xlm1gl4BrjB3UtjXU8DMzsD2Ojus2Jdy1dIBkYA97n7YUAZsR9y+Fx4nHscoR9IvYF0M7s4tlXtnod6rOP2jNTMfkJo6POfsa6lgZmlAT8Gbo11LXsiKEFfBPRp9DwvvC2umFl7QiH/T3d/Ntb1NHEUcJaZfUZo6OvrZvZYbEv6kkKg0N0bfhOaSij448WJwKfuXuzuNcCzwJExrmlXNphZL4Dw540xrqdZZnYZcAbwLY+vm34OIPQDfV74eyYPmG1mOTGtaheCEvQfAQPNrL+ZdSB0AWxajGvaiZkZobHlJe7+p1jX05S73+Luee7ej9Df35vuHldno+6+HlhjZoPCm04AFsewpKZWA0eYWVr43/sE4uhicRPTgEvDjy8FXohhLc0ys1MJDSWe5e7lsa6nMXdf4O493L1f+HumEBgR/j8adwIR9OELNhOAfxP6xnrK3RfFtqovOQq4hNCZ8tzwx2mxLmof9F3gn2Y2HxgO/DrG9Xwu/JvGVGA2sIDQ91fMb5E3syeAmcAgMys0s+8AdwAnmdkKQr+J3BGHNd4DdAZeC3+/3B9n9e0zNAWCiEjABeKMXkREdk1BLyIScAp6EZGAU9CLiAScgl5EJOAU9CIiAaegFxEJuP8HvHiKw1jJ554AAAAASUVORK5CYII=\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
},
"metadata": {
"needs_background": "light"
},
"output_type": "display_data"
}
],
"source": [
"import matplotlib.pyplot as plt\n",
"import matplotlib.ticker as ticker\n",
"%matplotlib inline\n",
"\n",
"plt.figure()\n",
"plt.plot(losses)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 预测\n",
"用训练好的模型进行预测。"
]
},
{
"cell_type": "code",
"execution_count": 22,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"the input words is: of, william\n",
"the predict words is: shakespeare\n",
"the true words is: shakespeare\n"
]
}
],
"source": [
"import random\n",
"def test(model):\n",
" model.eval()\n",
" # 从最后10组数据中随机选取1个\n",
" idx = random.randint(len(trigram)-10, len(trigram)-1)\n",
" print('the input words is: ' + trigram[idx][0][0] + ', ' + trigram[idx][0][1])\n",
" x_data = list(map(lambda w: word_to_idx[w], trigram[idx][0]))\n",
" x_data = paddle.to_tensor(np.array(x_data))\n",
" predicts = model(x_data)\n",
" predicts = predicts.numpy().tolist()[0]\n",
" predicts = predicts.index(max(predicts))\n",
" print('the predict words is: ' + idx_to_word[predicts])\n",
" y_data = trigram[idx][1]\n",
" print('the true words is: ' + y_data)\n",
"test(model)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.3"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
N-Gram模型在莎士比亚文集中训练word embedding
==============================================
N-gram
是计算机语言学和概率论范畴内的概念,是指给定的一段文本中N个项目的序列。
N=1 时 N-gram 又称为 unigram,N=2 时称为 bigram,N=3 时称为
trigram,以此类推。实际应用通常采用 bigram 和 trigram 进行计算。
本示例在莎士比亚文集上实现了trigram。
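在进入正式的代码之前,先用一个很短的英文句子示意 bigram 和 trigram 的形式(仅为概念说明的示例,与后面的莎士比亚数据无关):

.. code:: ipython3

    # 仅为概念示意:对一个简单句子构造 bigram 和 trigram
    words = "the quick brown fox jumps".split()
    bigrams = [tuple(words[i:i + 2]) for i in range(len(words) - 1)]
    trigrams = [tuple(words[i:i + 3]) for i in range(len(words) - 2)]
    print(bigrams)   # [('the', 'quick'), ('quick', 'brown'), ...]
    print(trigrams)  # [('the', 'quick', 'brown'), ('quick', 'brown', 'fox'), ...]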
环境
----
本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop。
.. code:: ipython3
import paddle
paddle.__version__
.. parsed-literal::
'0.0.0'
数据集&&相关参数
----------------
训练数据集采用了莎士比亚文集,\ `下载 <https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt>`__\ ,保存为txt格式即可。
context_size设为2,意味着是trigram。embedding_dim设为256。
.. code:: ipython3
!wget https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
.. parsed-literal::
--2020-09-09 14:58:26-- https://ocw.mit.edu/ans7870/6/6.006/s08/lecturenotes/files/t8.shakespeare.txt
正在解析主机 ocw.mit.edu (ocw.mit.edu)... 151.101.110.133
正在连接 ocw.mit.edu (ocw.mit.edu)|151.101.110.133|:443... 已连接。
已发出 HTTP 请求,正在等待回应... 200 OK
长度:5458199 (5.2M) [text/plain]
正在保存至: t8.shakespeare.txt
t8.shakespeare.txt 100%[===================>] 5.21M 94.1KB/s 用时 70s
2020-09-09 14:59:38 (75.7 KB/s) - 已保存 t8.shakespeare.txt [5458199/5458199])
.. code:: ipython3
embedding_dim = 256
context_size = 2
.. code:: ipython3
# 文件路径
path_to_file = './t8.shakespeare.txt'
test_sentence = open(path_to_file, 'rb').read().decode(encoding='utf-8')
# 文本长度是指文本中的字符个数
print ('Length of text: {} characters'.format(len(test_sentence)))
.. parsed-literal::
Length of text: 5458199 characters
去除标点符号
------------
因为标点符号本身无实际意义,用\ ``string``\ 库中的punctuation,完成英文符号的替换。
.. code:: ipython3
from string import punctuation
process_dicts={i:'' for i in punctuation}
print(process_dicts)
.. parsed-literal::
{'!': '', '"': '', '#': '', '$': '', '%': '', '&': '', "'": '', '(': '', ')': '', '*': '', '+': '', ',': '', '-': '', '.': '', '/': '', ':': '', ';': '', '<': '', '=': '', '>': '', '?': '', '@': '', '[': '', '\\': '', ']': '', '^': '', '_': '', '`': '', '{': '', '|': '', '}': '', '~': ''}
.. code:: ipython3
punc_table = str.maketrans(process_dicts)
test_sentence = test_sentence.translate(punc_table)
test_sentence = test_sentence.lower().split()
vocab = set(test_sentence)
print(len(vocab))
.. parsed-literal::
28343
数据预处理
----------
将文本拆成元组的形式,格式为((‘第一个词’, ‘第二个词’),
‘第三个词’);其中,第三个词就是我们要预测的目标。
.. code:: ipython3
trigram = [[[test_sentence[i], test_sentence[i + 1]], test_sentence[i + 2]]
for i in range(len(test_sentence) - 2)]
word_to_idx = {word: i for i, word in enumerate(vocab)}
idx_to_word = {word_to_idx[word]: word for word in word_to_idx}
# 看一下数据集
print(trigram[:3])
.. parsed-literal::
[[['this', 'is'], 'the'], [['is', 'the'], '100th'], [['the', '100th'], 'etext']]
构建\ ``Dataset``\ 类加载数据
------------------------------
用\ ``paddle.io.Dataset``\ 构建数据集,然后作为参数传入到\ ``paddle.io.DataLoader``\ ,完成数据集的加载。
.. code:: ipython3
import paddle
import numpy as np
batch_size = 256
paddle.disable_static()
class TrainDataset(paddle.io.Dataset):
def __init__(self, tuple_data):
self.tuple_data = tuple_data
def __getitem__(self, idx):
data = self.tuple_data[idx][0]
label = self.tuple_data[idx][1]
data = np.array(list(map(lambda w: word_to_idx[w], data)))
label = np.array(word_to_idx[label])
return data, label
def __len__(self):
return len(self.tuple_data)
train_dataset = TrainDataset(trigram)
train_loader = paddle.io.DataLoader(train_dataset,places=paddle.CPUPlace(), return_list=True,
shuffle=True, batch_size=batch_size, drop_last=True)
组网&训练
---------
这里用paddle动态图的方式组网。为了构建Trigram模型,用一层 ``Embedding``
与两层 ``Linear`` 完成构建。\ ``Embedding``
层对输入的前两个单词embedding,然后输入到后面的两个\ ``Linear``\ 层中,完成特征提取。
.. code:: ipython3
import paddle
import numpy as np
hidden_size = 1024
class NGramModel(paddle.nn.Layer):
def __init__(self, vocab_size, embedding_dim, context_size):
super(NGramModel, self).__init__()
self.embedding = paddle.nn.Embedding(num_embeddings=vocab_size, embedding_dim=embedding_dim)
self.linear1 = paddle.nn.Linear(context_size * embedding_dim, hidden_size)
self.linear2 = paddle.nn.Linear(hidden_size, len(vocab))
def forward(self, x):
x = self.embedding(x)
x = paddle.reshape(x, [-1, context_size * embedding_dim])
x = self.linear1(x)
x = paddle.nn.functional.relu(x)
x = self.linear2(x)
return x
定义\ ``train()``\ 函数,对模型进行训练。
-----------------------------------------
.. code:: ipython3
vocab_size = len(vocab)
epochs = 2
losses = []
def train(model):
model.train()
optim = paddle.optimizer.Adam(learning_rate=0.01, parameters=model.parameters())
for epoch in range(epochs):
for batch_id, data in enumerate(train_loader()):
x_data = data[0]
y_data = data[1]
predicts = model(x_data)
y_data = paddle.reshape(y_data, ([-1, 1]))
loss = paddle.nn.functional.softmax_with_cross_entropy(predicts, y_data)
avg_loss = paddle.mean(loss)
avg_loss.backward()
if batch_id % 500 == 0:
losses.append(avg_loss.numpy())
print("epoch: {}, batch_id: {}, loss is: {}".format(epoch, batch_id, avg_loss.numpy()))
optim.minimize(avg_loss)
model.clear_gradients()
model = NGramModel(vocab_size, embedding_dim, context_size)
train(model)
.. parsed-literal::
epoch: 0, batch_id: 0, loss is: [10.252193]
epoch: 0, batch_id: 500, loss is: [6.894636]
epoch: 0, batch_id: 1000, loss is: [6.849346]
epoch: 0, batch_id: 1500, loss is: [6.931605]
epoch: 0, batch_id: 2000, loss is: [6.6860313]
epoch: 0, batch_id: 2500, loss is: [6.2472367]
epoch: 0, batch_id: 3000, loss is: [6.8818874]
epoch: 0, batch_id: 3500, loss is: [6.941615]
epoch: 1, batch_id: 0, loss is: [6.3628616]
epoch: 1, batch_id: 500, loss is: [6.2065206]
epoch: 1, batch_id: 1000, loss is: [6.5334334]
epoch: 1, batch_id: 1500, loss is: [6.5788]
epoch: 1, batch_id: 2000, loss is: [6.352103]
epoch: 1, batch_id: 2500, loss is: [6.6272373]
epoch: 1, batch_id: 3000, loss is: [6.801074]
epoch: 1, batch_id: 3500, loss is: [6.2274427]
打印loss下降曲线
----------------
通过可视化loss的曲线,可以看到模型训练的效果。
.. code:: ipython3
import matplotlib.pyplot as plt
import matplotlib.ticker as ticker
%matplotlib inline
plt.figure()
plt.plot(losses)
.. parsed-literal::
[<matplotlib.lines.Line2D at 0x14e27b3c8>]
.. image:: n_gram_model_files/n_gram_model_19_1.png
预测
----
用训练好的模型进行预测。
.. code:: ipython3
import random
def test(model):
model.eval()
# 从最后10组数据中随机选取1个
idx = random.randint(len(trigram)-10, len(trigram)-1)
print('the input words is: ' + trigram[idx][0][0] + ', ' + trigram[idx][0][1])
x_data = list(map(lambda w: word_to_idx[w], trigram[idx][0]))
x_data = paddle.to_tensor(np.array(x_data))
predicts = model(x_data)
predicts = predicts.numpy().tolist()[0]
predicts = predicts.index(max(predicts))
print('the predict words is: ' + idx_to_word[predicts])
y_data = trigram[idx][1]
print('the true words is: ' + y_data)
test(model)
.. parsed-literal::
the input words is: of, william
the predict words is: shakespeare
the true words is: shakespeare
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用注意力机制的LSTM的机器翻译\n",
"\n",
"本示例教程介绍如何使用飞桨完成一个机器翻译任务。我们将会使用飞桨提供的LSTM的API,组建一个`sequence to sequence with attention`的机器翻译的模型,并在示例的数据集上完成从英文翻译成中文的机器翻译。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 环境设置\n",
"\n",
"本示例教程基于飞桨2.0-beta版本。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"89af2088b6e74bdfeef2d4d78e08461ed2aafee5\n"
]
}
],
"source": [
"import paddle\n",
"import paddle.nn.functional as F\n",
"import re\n",
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 下载数据集\n",
"\n",
"我们将使用 [http://www.manythings.org/anki/](http://www.manythings.org/anki/) 提供的中英文的英汉句对作为数据集,来完成本任务。该数据集含有23610个中英文双语的句对。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"--2020-09-04 16:13:35-- https://www.manythings.org/anki/cmn-eng.zip\n",
"Resolving www.manythings.org (www.manythings.org)... 104.24.109.196, 172.67.173.198, 2606:4700:3037::6818:6cc4, ...\n",
"Connecting to www.manythings.org (www.manythings.org)|104.24.109.196|:443... connected.\n",
"HTTP request sent, awaiting response... 200 OK\n",
"Length: 1030722 (1007K) [application/zip]\n",
"Saving to: ‘cmn-eng.zip’\n",
"\n",
"cmn-eng.zip 100%[===================>] 1007K 520KB/s in 1.9s \n",
"\n",
"2020-09-04 16:13:38 (520 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]\n",
"\n",
"Archive: cmn-eng.zip\n",
" inflating: cmn.txt \n",
" inflating: _about.txt \n"
]
}
],
"source": [
"!wget -c https://www.manythings.org/anki/cmn-eng.zip && unzip cmn-eng.zip"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" 23610 cmn.txt\r\n"
]
}
],
"source": [
"!wc -l cmn.txt"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建双语句对的数据结构\n",
"\n",
"接下来我们通过处理下载下来的双语句对的文本文件,将双语句对读入到python的数据结构中。这里做了如下的处理。\n",
"\n",
"- 对于英文,会把全部英文都变成小写,并只保留英文的单词。\n",
"- 对于中文,为了简便起见,未做分词,按照字做了切分。\n",
"- 为了后续的程序运行的更快,我们通过限制句子长度,和只保留部分英文单词开头的句子的方式,得到了一个较小的数据集。这样得到了一个有5508个句对的数据集。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [],
"source": [
"MAX_LEN = 10"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"5508\n",
"(['i', 'won'], ['我', '赢', '了', '。'])\n",
"(['he', 'ran'], ['他', '跑', '了', '。'])\n",
"(['i', 'quit'], ['我', '退', '出', '。'])\n",
"(['i', 'm', 'ok'], ['我', '沒', '事', '。'])\n",
"(['i', 'm', 'up'], ['我', '已', '经', '起', '来', '了', '。'])\n",
"(['we', 'try'], ['我', '们', '来', '试', '试', '。'])\n",
"(['he', 'came'], ['他', '来', '了', '。'])\n",
"(['he', 'runs'], ['他', '跑', '。'])\n",
"(['i', 'agree'], ['我', '同', '意', '。'])\n",
"(['i', 'm', 'ill'], ['我', '生', '病', '了', '。'])\n"
]
}
],
"source": [
"lines = open('cmn.txt', encoding='utf-8').read().strip().split('\\n')\n",
"words_re = re.compile(r'\\w+')\n",
"\n",
"pairs = []\n",
"for l in lines:\n",
" en_sent, cn_sent, _ = l.split('\\t')\n",
" pairs.append((words_re.findall(en_sent.lower()), list(cn_sent)))\n",
"\n",
"# create a smaller dataset to make the demo process faster\n",
"filtered_pairs = []\n",
"\n",
"for x in pairs:\n",
" if len(x[0]) < MAX_LEN and len(x[1]) < MAX_LEN and \\\n",
" x[0][0] in ('i', 'you', 'he', 'she', 'we', 'they'):\n",
" filtered_pairs.append(x)\n",
" \n",
"print(len(filtered_pairs))\n",
"for x in filtered_pairs[:10]: print(x) "
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建词表\n",
"\n",
"接下来我们分别创建中英文的词表,这两份词表会用来将英文和中文的句子转换为词的ID构成的序列。词表中还加入了如下三个特殊的词:\n",
"- `<pad>`: 用来对较短的句子进行填充。\n",
"- `<bos>`: \"begin of sentence\", 表示句子的开始的特殊词。\n",
"- `<eos>`: \"end of sentence\", 表示句子的结束的特殊词。\n",
"\n",
"Note: 在实际的任务中,可能还需要通过`<unk>`(或者`<oov>`)特殊词来表示未在词表中出现的词。"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"2539\n",
"2039\n"
]
}
],
"source": [
"en_vocab = {}\n",
"cn_vocab = {}\n",
"\n",
"# create special token for pad, begin of sentence, end of sentence\n",
"en_vocab['<pad>'], en_vocab['<bos>'], en_vocab['<eos>'] = 0, 1, 2\n",
"cn_vocab['<pad>'], cn_vocab['<bos>'], cn_vocab['<eos>'] = 0, 1, 2\n",
"\n",
"en_idx, cn_idx = 3, 3\n",
"for en, cn in filtered_pairs:\n",
" for w in en: \n",
" if w not in en_vocab: \n",
" en_vocab[w] = en_idx\n",
" en_idx += 1\n",
" for w in cn: \n",
" if w not in cn_vocab: \n",
" cn_vocab[w] = cn_idx\n",
" cn_idx += 1\n",
"\n",
"print(len(list(en_vocab)))\n",
"print(len(list(cn_vocab)))"
]
},
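  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "前面提到,实际任务中一般会用`<unk>`(或`<oov>`)来表示词表之外的词。下面是一个仅作示意的小例子:在词表的副本中加入`<unk>`,并把词表之外的词都映射到它。这里的`demo_vocab`、`sent_to_ids`都是为了演示而假设的名字,本教程后续并没有用到它们。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 仅为示意:在词表的副本中加入 <unk>,把词表之外的词映射到它\n",
    "demo_vocab = dict(en_vocab)\n",
    "demo_vocab['<unk>'] = len(demo_vocab)\n",
    "\n",
    "def sent_to_ids(words, vocab):\n",
    "    # 不在词表中的词统一映射到 <unk> 的 ID\n",
    "    return [vocab.get(w, vocab['<unk>']) for w in words]\n",
    "\n",
    "print(sent_to_ids(['i', 'won'], demo_vocab))\n",
    "print(sent_to_ids(['hello', 'transformers'], demo_vocab))  # 词表外的词会得到 <unk> 的 ID"
   ]
  },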
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建padding过的数据集\n",
"\n",
"接下来根据词表,我们将会创建一份实际的用于训练的用numpy array组织起来的数据集。\n",
"- 所有的句子都通过`<pad>`补充成为了长度相同的句子。\n",
"- 对于英文句子(源语言),我们将其反转了过来,这会带来更好的翻译的效果。\n",
"- 所创建的`padded_cn_label_sents`是训练过程中的预测的目标,即,每个中文的当前词去预测下一个词是什么词。\n"
]
},
{
"cell_type": "code",
"execution_count": 6,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"(5508, 11)\n",
"(5508, 12)\n",
"(5508, 12)\n"
]
}
],
"source": [
"padded_en_sents = []\n",
"padded_cn_sents = []\n",
"padded_cn_label_sents = []\n",
"for en, cn in filtered_pairs:\n",
" # reverse source sentence\n",
" padded_en_sent = en + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(en))\n",
" padded_en_sent.reverse()\n",
" padded_cn_sent = ['<bos>'] + cn + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(cn))\n",
" padded_cn_label_sent = cn + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(cn) + 1) \n",
"\n",
" padded_en_sents.append([en_vocab[w] for w in padded_en_sent])\n",
" padded_cn_sents.append([cn_vocab[w] for w in padded_cn_sent])\n",
" padded_cn_label_sents.append([cn_vocab[w] for w in padded_cn_label_sent])\n",
"\n",
"train_en_sents = np.array(padded_en_sents)\n",
"train_cn_sents = np.array(padded_cn_sents)\n",
"train_cn_label_sents = np.array(padded_cn_label_sents)\n",
"\n",
"print(train_en_sents.shape)\n",
"print(train_cn_sents.shape)\n",
"print(train_cn_label_sents.shape)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 创建网络\n",
"\n",
"我们将会创建一个Encoder-AttentionDecoder架构的模型结构用来完成机器翻译任务。\n",
"首先我们将设置一些必要的网络结构中用到的参数。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [],
"source": [
"embedding_size = 128\n",
"hidden_size = 256\n",
"num_encoder_lstm_layers = 1\n",
"en_vocab_size = len(list(en_vocab))\n",
"cn_vocab_size = len(list(cn_vocab))\n",
"epochs = 20\n",
"batch_size = 16"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Encoder部分\n",
"\n",
"在编码器的部分,我们通过查找完Embedding之后接一个LSTM的方式构建一个对源语言编码的网络。飞桨的RNN系列的API,除了LSTM之外,还提供了SimleRNN, GRU供使用,同时,还可以使用反向RNN,双向RNN,多层RNN等形式。也可以通过`dropout`参数设置是否对多层RNN的中间层进行`dropout`处理,来防止过拟合。\n",
"\n",
"除了使用序列到序列的RNN操作之外,也可以通过SimpleRNN, GRUCell, LSTMCell等API更灵活的创建单步的RNN计算,甚至通过继承RNNCellBase来实现自己的RNN计算单元。"
]
},
{
"cell_type": "code",
"execution_count": 8,
"metadata": {},
"outputs": [],
"source": [
"# encoder: simply learn representation of source sentence\n",
"class Encoder(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(Encoder, self).__init__()\n",
" self.emb = paddle.nn.Embedding(en_vocab_size, embedding_size,)\n",
" self.lstm = paddle.nn.LSTM(input_size=embedding_size, \n",
" hidden_size=hidden_size, \n",
" num_layers=num_encoder_lstm_layers)\n",
"\n",
" def forward(self, x):\n",
" x = self.emb(x)\n",
" x, (_, _) = self.lstm(x)\n",
" return x"
]
},
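  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "上面的Encoder使用的是单层单向的LSTM。作为仅供参考的示意,下面的`DeepEncoder`展示了如何通过`num_layers`和`dropout`参数构建多层LSTM,并对中间层的输出做dropout。其中的层数(2)和dropout比例(0.2)都是假设的取值,本教程后面的训练并没有实际使用它。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 仅为示意:多层 LSTM 编码器,层数与 dropout 比例均为假设值\n",
    "class DeepEncoder(paddle.nn.Layer):\n",
    "    def __init__(self):\n",
    "        super(DeepEncoder, self).__init__()\n",
    "        self.emb = paddle.nn.Embedding(en_vocab_size, embedding_size)\n",
    "        self.lstm = paddle.nn.LSTM(input_size=embedding_size,\n",
    "                                   hidden_size=hidden_size,\n",
    "                                   num_layers=2,\n",
    "                                   dropout=0.2)\n",
    "\n",
    "    def forward(self, x):\n",
    "        x = self.emb(x)\n",
    "        x, (_, _) = self.lstm(x)\n",
    "        return x"
   ]
  },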
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# AttentionDecoder部分\n",
"\n",
"在解码器部分,我们通过一个带有注意力机制的LSTM来完成解码。\n",
"\n",
"- 单步的LSTM:在解码器的实现的部分,我们同样使用LSTM,与Encoder部分不同的是,下面的代码,每次只让LSTM往前计算一次。整体的recurrent部分,是在训练循环内完成的。\n",
"- 注意力机制:这里使用了一个由两个Linear组成的网络来完成注意力机制的计算,它用来计算出目标语言在每次翻译一个词的时候,需要对源语言当中的每个词需要赋予多少的权重。\n",
"- 对于第一次接触这样的网络结构来说,下面的代码在理解起来可能稍微有些复杂,你可以通过插入打印每个tensor在不同步骤时的形状的方式来更好的理解。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [],
"source": [
"# only move one step of LSTM, \n",
"# the recurrent loop is implemented inside training loop\n",
"class AttentionDecoder(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(AttentionDecoder, self).__init__()\n",
" self.emb = paddle.nn.Embedding(cn_vocab_size, embedding_size)\n",
" self.lstm = paddle.nn.LSTM(input_size=embedding_size + hidden_size, \n",
" hidden_size=hidden_size)\n",
"\n",
" # for computing attention weights\n",
" self.attention_linear1 = paddle.nn.Linear(hidden_size * 2, hidden_size)\n",
" self.attention_linear2 = paddle.nn.Linear(hidden_size, 1)\n",
" \n",
" # for computing output logits\n",
" self.outlinear =paddle.nn.Linear(hidden_size, cn_vocab_size)\n",
"\n",
" def forward(self, x, previous_hidden, previous_cell, encoder_outputs):\n",
" x = self.emb(x)\n",
" \n",
" attention_inputs = paddle.concat((encoder_outputs, \n",
" paddle.tile(previous_hidden, repeat_times=[1, MAX_LEN+1, 1])),\n",
" axis=-1\n",
" )\n",
"\n",
" attention_hidden = self.attention_linear1(attention_inputs)\n",
" attention_hidden = F.tanh(attention_hidden)\n",
" attention_logits = self.attention_linear2(attention_hidden)\n",
" attention_logits = paddle.squeeze(attention_logits)\n",
"\n",
" attention_weights = F.softmax(attention_logits) \n",
" attention_weights = paddle.expand_as(paddle.unsqueeze(attention_weights, -1), \n",
" encoder_outputs)\n",
"\n",
" context_vector = paddle.multiply(encoder_outputs, attention_weights) \n",
" context_vector = paddle.reduce_sum(context_vector, 1)\n",
" context_vector = paddle.unsqueeze(context_vector, 1)\n",
" \n",
" lstm_input = paddle.concat((x, context_vector), axis=-1)\n",
"\n",
" # LSTM requirement to previous hidden/state: \n",
" # (number_of_layers * direction, batch, hidden)\n",
" previous_hidden = paddle.transpose(previous_hidden, [1, 0, 2])\n",
" previous_cell = paddle.transpose(previous_cell, [1, 0, 2])\n",
" \n",
" x, (hidden, cell) = self.lstm(lstm_input, (previous_hidden, previous_cell))\n",
" \n",
" # change the return to (batch, number_of_layers * direction, hidden)\n",
" hidden = paddle.transpose(hidden, [1, 0, 2])\n",
" cell = paddle.transpose(cell, [1, 0, 2])\n",
"\n",
" output = self.outlinear(hidden)\n",
" output = paddle.squeeze(output)\n",
" return output, (hidden, cell)"
]
},
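  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "下面是一个仅作示意的小片段:按照上文的参数实例化Encoder和AttentionDecoder,前向计算一步并打印相关tensor的形状,帮助理解注意力部分的计算过程。其中以`demo_`开头的变量都是为了演示而假设的名字,训练中真正使用的实例会在后面的训练代码中创建。"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "# 仅为示意:前向计算一步,打印主要 tensor 的形状\n",
    "demo_encoder = Encoder()\n",
    "demo_decoder = AttentionDecoder()\n",
    "\n",
    "demo_en = paddle.to_tensor(train_en_sents[:batch_size])         # (batch, MAX_LEN + 1)\n",
    "demo_repr = demo_encoder(demo_en)                               # (batch, MAX_LEN + 1, hidden_size)\n",
    "\n",
    "demo_hidden = paddle.zeros([batch_size, 1, hidden_size])\n",
    "demo_cell = paddle.zeros([batch_size, 1, hidden_size])\n",
    "demo_word = paddle.to_tensor(train_cn_sents[:batch_size, 0:1])  # 第一步输入 <bos>\n",
    "\n",
    "demo_logits, (demo_hidden, demo_cell) = demo_decoder(demo_word, demo_hidden, demo_cell, demo_repr)\n",
    "print(demo_repr.shape, demo_logits.shape, demo_hidden.shape, demo_cell.shape)"
   ]
  },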
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 训练模型\n",
"\n",
"接下来我们开始训练模型。\n",
"\n",
"- 在每个epoch开始之前,我们对训练数据进行了随机打乱。\n",
"- 我们通过多次调用`atten_decoder`,在这里实现了解码时的recurrent循环。\n",
"- `teacher forcing`策略: 在每次解码下一个词时,我们给定了训练数据当中的真实词作为了预测下一个词时的输入。相应的,你也可以尝试用模型预测的结果作为下一个词的输入。(或者混合使用)"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch:0\n",
"iter 0, loss:[7.6194725]\n",
"iter 200, loss:[3.4147663]\n",
"epoch:1\n",
"iter 0, loss:[3.0931656]\n",
"iter 200, loss:[2.7543137]\n",
"epoch:2\n",
"iter 0, loss:[2.8413522]\n",
"iter 200, loss:[2.340513]\n",
"epoch:3\n",
"iter 0, loss:[2.597812]\n",
"iter 200, loss:[2.5552855]\n",
"epoch:4\n",
"iter 0, loss:[2.0783448]\n",
"iter 200, loss:[2.4544785]\n",
"epoch:5\n",
"iter 0, loss:[1.8709135]\n",
"iter 200, loss:[1.8736631]\n",
"epoch:6\n",
"iter 0, loss:[1.9589291]\n",
"iter 200, loss:[2.119414]\n",
"epoch:7\n",
"iter 0, loss:[1.5829577]\n",
"iter 200, loss:[1.6002902]\n",
"epoch:8\n",
"iter 0, loss:[1.6022769]\n",
"iter 200, loss:[1.52694]\n",
"epoch:9\n",
"iter 0, loss:[1.3616685]\n",
"iter 200, loss:[1.5420443]\n",
"epoch:10\n",
"iter 0, loss:[1.0397792]\n",
"iter 200, loss:[1.2458231]\n",
"epoch:11\n",
"iter 0, loss:[1.2107158]\n",
"iter 200, loss:[1.426417]\n",
"epoch:12\n",
"iter 0, loss:[1.1840894]\n",
"iter 200, loss:[1.0999664]\n",
"epoch:13\n",
"iter 0, loss:[1.0968472]\n",
"iter 200, loss:[0.8149167]\n",
"epoch:14\n",
"iter 0, loss:[0.95585203]\n",
"iter 200, loss:[1.0070628]\n",
"epoch:15\n",
"iter 0, loss:[0.89463925]\n",
"iter 200, loss:[0.8288595]\n",
"epoch:16\n",
"iter 0, loss:[0.5672495]\n",
"iter 200, loss:[0.7317069]\n",
"epoch:17\n",
"iter 0, loss:[0.76785177]\n",
"iter 200, loss:[0.5319323]\n",
"epoch:18\n",
"iter 0, loss:[0.5250005]\n",
"iter 200, loss:[0.4182841]\n",
"epoch:19\n",
"iter 0, loss:[0.52320284]\n",
"iter 200, loss:[0.47618982]\n"
]
}
],
"source": [
"encoder = Encoder()\n",
"atten_decoder = AttentionDecoder()\n",
"\n",
"opt = paddle.optimizer.Adam(learning_rate=0.001, \n",
" parameters=encoder.parameters()+atten_decoder.parameters())\n",
"\n",
"for epoch in range(epochs):\n",
" print(\"epoch:{}\".format(epoch))\n",
"\n",
" # shuffle training data\n",
" perm = np.random.permutation(len(train_en_sents))\n",
" train_en_sents_shuffled = train_en_sents[perm]\n",
" train_cn_sents_shuffled = train_cn_sents[perm]\n",
" train_cn_label_sents_shuffled = train_cn_label_sents[perm]\n",
"\n",
" for iteration in range(train_en_sents_shuffled.shape[0] // batch_size):\n",
" x_data = train_en_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]\n",
" sent = paddle.to_tensor(x_data)\n",
" en_repr = encoder(sent)\n",
"\n",
" x_cn_data = train_cn_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]\n",
" x_cn_label_data = train_cn_label_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]\n",
"\n",
" # shape: (batch, num_layer(=1 here) * num_of_direction(=1 here), hidden_size)\n",
" hidden = paddle.zeros([batch_size, 1, hidden_size])\n",
" cell = paddle.zeros([batch_size, 1, hidden_size])\n",
"\n",
" loss = paddle.zeros([1])\n",
" # the decoder recurrent loop mentioned above\n",
" for i in range(MAX_LEN + 2):\n",
" cn_word = paddle.to_tensor(x_cn_data[:,i:i+1])\n",
" cn_word_label = paddle.to_tensor(x_cn_label_data[:,i:i+1])\n",
"\n",
" logits, (hidden, cell) = atten_decoder(cn_word, hidden, cell, en_repr)\n",
" step_loss = F.softmax_with_cross_entropy(logits, cn_word_label)\n",
" avg_step_loss = paddle.mean(step_loss)\n",
" loss += avg_step_loss\n",
"\n",
" loss = loss / (MAX_LEN + 2)\n",
" if(iteration % 200 == 0):\n",
" print(\"iter {}, loss:{}\".format(iteration, loss.numpy()))\n",
"\n",
" loss.backward()\n",
" opt.minimize(loss)\n",
" encoder.clear_gradients()\n",
" atten_decoder.clear_gradients()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用模型进行机器翻译\n",
"\n",
"根据你所使用的计算设备的不同,上面的训练过程可能需要不等的时间。(在一台Mac笔记本上,大约耗时15~20分钟)\n",
"完成上面的模型训练之后,我们可以得到一个能够从英文翻译成中文的机器翻译模型。接下来我们通过一个greedy search来实现使用该模型完成实际的机器翻译。(实际的任务中,你可能需要用beam search算法来提升效果)"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"i agree with him\n",
"true: 我同意他。\n",
"pred: 我同意他。\n",
"i think i ll take a bath tonight\n",
"true: 我想我今晚會洗澡。\n",
"pred: 我想我今晚會洗澡。\n",
"he asked for a drink of water\n",
"true: 他要了水喝。\n",
"pred: 他喝了一杯水。\n",
"i began running\n",
"true: 我開始跑。\n",
"pred: 我開始跑。\n",
"i m sick\n",
"true: 我生病了。\n",
"pred: 我生病了。\n",
"you had better go to the dentist s\n",
"true: 你最好去看牙醫。\n",
"pred: 你最好去看牙醫。\n",
"we went for a walk in the forest\n",
"true: 我们去了林中散步。\n",
"pred: 我們去公园散步。\n",
"you ve arrived very early\n",
"true: 你來得很早。\n",
"pred: 你去早个。\n",
"he pretended not to be listening\n",
"true: 他裝作沒在聽。\n",
"pred: 他假装聽到它。\n",
"he always wanted to study japanese\n",
"true: 他一直想學日語。\n",
"pred: 他一直想學日語。\n"
]
}
],
"source": [
"encoder.eval()\n",
"atten_decoder.eval()\n",
"\n",
"num_of_exampels_to_evaluate = 10\n",
"\n",
"indices = np.random.choice(len(train_en_sents), num_of_exampels_to_evaluate, replace=False)\n",
"x_data = train_en_sents[indices]\n",
"sent = paddle.to_tensor(x_data)\n",
"en_repr = encoder(sent)\n",
"\n",
"word = np.array(\n",
" [[cn_vocab['<bos>']]] * num_of_exampels_to_evaluate\n",
")\n",
"word = paddle.to_tensor(word)\n",
"\n",
"hidden = paddle.zeros([num_of_exampels_to_evaluate, 1, hidden_size])\n",
"cell = paddle.zeros([num_of_exampels_to_evaluate, 1, hidden_size])\n",
"\n",
"decoded_sent = []\n",
"for i in range(MAX_LEN + 2):\n",
" logits, (hidden, cell) = atten_decoder(word, hidden, cell, en_repr)\n",
" word = paddle.argmax(logits, axis=1)\n",
" decoded_sent.append(word.numpy())\n",
" word = paddle.unsqueeze(word, axis=-1)\n",
" \n",
"results = np.stack(decoded_sent, axis=1)\n",
"for i in range(num_of_exampels_to_evaluate):\n",
" en_input = \" \".join(filtered_pairs[indices[i]][0])\n",
" ground_truth_translate = \"\".join(filtered_pairs[indices[i]][1])\n",
" model_translate = \"\"\n",
" for k in results[i]:\n",
" w = list(cn_vocab)[k]\n",
" if w != '<pad>' and w != '<eos>':\n",
" model_translate += w\n",
" print(en_input)\n",
" print(\"true: {}\".format(ground_truth_translate))\n",
" print(\"pred: {}\".format(model_translate))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The End\n",
"\n",
"你还可以通过变换网络结构,调整数据集,尝试不同的参数的方式来进一步提升本示例当中的机器翻译的效果。同时,也可以尝试在其他的类似的任务中用飞桨来完成实际的实践。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
使用注意力机制的LSTM的机器翻译
==============================
本示例教程介绍如何使用飞桨完成一个机器翻译任务。我们将会使用飞桨提供的LSTM的API,组建一个\ ``sequence to sequence with attention``\ 的机器翻译的模型,并在示例的数据集上完成从英文翻译成中文的机器翻译。
环境设置
---------
本示例教程基于飞桨2.0-beta版本。
.. code:: ipython3
import paddle
import paddle.nn.functional as F
import re
import numpy as np
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
89af2088b6e74bdfeef2d4d78e08461ed2aafee5
下载数据集
------------
我们将使用 http://www.manythings.org/anki/
提供的中英文的英汉句对作为数据集,来完成本任务。该数据集含有23610个中英文双语的句对。
.. code:: ipython3
!wget -c https://www.manythings.org/anki/cmn-eng.zip && unzip cmn-eng.zip
.. parsed-literal::
--2020-09-04 16:13:35-- https://www.manythings.org/anki/cmn-eng.zip
Resolving www.manythings.org (www.manythings.org)... 104.24.109.196, 172.67.173.198, 2606:4700:3037::6818:6cc4, ...
Connecting to www.manythings.org (www.manythings.org)|104.24.109.196|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 1030722 (1007K) [application/zip]
Saving to: ‘cmn-eng.zip’
cmn-eng.zip 100%[===================>] 1007K 520KB/s in 1.9s
2020-09-04 16:13:38 (520 KB/s) - ‘cmn-eng.zip’ saved [1030722/1030722]
Archive: cmn-eng.zip
inflating: cmn.txt
inflating: _about.txt
.. code:: ipython3
!wc -l cmn.txt
.. parsed-literal::
23610 cmn.txt
构建双语句对的数据结构
-------------------------
接下来我们通过处理下载下来的双语句对的文本文件,将双语句对读入到python的数据结构中。这里做了如下的处理。
- 对于英文,会把全部英文都变成小写,并只保留英文的单词。
- 对于中文,为了简便起见,未做分词,按照字做了切分。
- 为了让后续的程序运行得更快,我们通过限制句子长度,并只保留以特定英文单词开头的句子的方式,得到了一个较小的、含有5508个句对的数据集。
.. code:: ipython3
MAX_LEN = 10
.. code:: ipython3
lines = open('cmn.txt', encoding='utf-8').read().strip().split('\n')
words_re = re.compile(r'\w+')
pairs = []
for l in lines:
en_sent, cn_sent, _ = l.split('\t')
pairs.append((words_re.findall(en_sent.lower()), list(cn_sent)))
# create a smaller dataset to make the demo process faster
filtered_pairs = []
for x in pairs:
if len(x[0]) < MAX_LEN and len(x[1]) < MAX_LEN and \
x[0][0] in ('i', 'you', 'he', 'she', 'we', 'they'):
filtered_pairs.append(x)
print(len(filtered_pairs))
for x in filtered_pairs[:10]: print(x)
.. parsed-literal::
5508
(['i', 'won'], ['我', '赢', '了', '。'])
(['he', 'ran'], ['他', '跑', '了', '。'])
(['i', 'quit'], ['我', '退', '出', '。'])
(['i', 'm', 'ok'], ['我', '沒', '事', '。'])
(['i', 'm', 'up'], ['我', '已', '经', '起', '来', '了', '。'])
(['we', 'try'], ['我', '们', '来', '试', '试', '。'])
(['he', 'came'], ['他', '来', '了', '。'])
(['he', 'runs'], ['他', '跑', '。'])
(['i', 'agree'], ['我', '同', '意', '。'])
(['i', 'm', 'ill'], ['我', '生', '病', '了', '。'])
创建词表
----------
接下来我们分别创建中英文的词表,这两份词表会用来将英文和中文的句子转换为词的ID构成的序列。词表中还加入了如下三个特殊的词:
- ``<pad>``: 用来对较短的句子进行填充。 - ``<bos>``: “begin of
sentence”, 表示句子的开始的特殊词。 - ``<eos>``: “end of sentence”,
表示句子的结束的特殊词。
Note:
在实际的任务中,可能还需要通过\ ``<unk>``\ (或者\ ``<oov>``\ )特殊词来表示未在词表中出现的词。
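下面给出一个处理未登录词的小示意(这只是一个假设性的写法,并不是本教程后续代码实际使用的方案):在建词表时预留\ ``<unk>``\ 的ID,查词时用字典的\ ``get``\ 方法回退到\ ``<unk>``\ 。
.. code:: ipython3
    # 示意:带<unk>回退的查词方式(demo_vocab仅为举例)
    demo_vocab = {'<pad>': 0, '<bos>': 1, '<eos>': 2, '<unk>': 3}
    def lookup(vocab, word):
        # 未出现在词表中的词统一映射到<unk>
        return vocab.get(word, vocab['<unk>'])
    print(lookup(demo_vocab, 'tom'))  # 输出 3
下面是本教程实际使用的建词表代码: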
.. code:: ipython3
en_vocab = {}
cn_vocab = {}
# create special token for pad, begin of sentence, end of sentence
en_vocab['<pad>'], en_vocab['<bos>'], en_vocab['<eos>'] = 0, 1, 2
cn_vocab['<pad>'], cn_vocab['<bos>'], cn_vocab['<eos>'] = 0, 1, 2
en_idx, cn_idx = 3, 3
for en, cn in filtered_pairs:
for w in en:
if w not in en_vocab:
en_vocab[w] = en_idx
en_idx += 1
for w in cn:
if w not in cn_vocab:
cn_vocab[w] = cn_idx
cn_idx += 1
print(len(list(en_vocab)))
print(len(list(cn_vocab)))
.. parsed-literal::
2539
2039
创建padding过的数据集
-----------------------------
接下来根据词表,我们将会创建一份实际的用于训练的用numpy
array组织起来的数据集。 -
所有的句子都通过\ ``<pad>``\ 补充成为了长度相同的句子。 -
对于英文句子(源语言),我们将其反转了过来,这会带来更好的翻译的效果。 -
所创建的\ ``padded_cn_label_sents``\ 是训练过程中的预测的目标,即,每个中文的当前词去预测下一个词是什么词。
.. code:: ipython3
padded_en_sents = []
padded_cn_sents = []
padded_cn_label_sents = []
for en, cn in filtered_pairs:
# reverse source sentence
padded_en_sent = en + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(en))
padded_en_sent.reverse()
padded_cn_sent = ['<bos>'] + cn + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(cn))
padded_cn_label_sent = cn + ['<eos>'] + ['<pad>'] * (MAX_LEN - len(cn) + 1)
padded_en_sents.append([en_vocab[w] for w in padded_en_sent])
padded_cn_sents.append([cn_vocab[w] for w in padded_cn_sent])
padded_cn_label_sents.append([cn_vocab[w] for w in padded_cn_label_sent])
train_en_sents = np.array(padded_en_sents)
train_cn_sents = np.array(padded_cn_sents)
train_cn_label_sents = np.array(padded_cn_label_sents)
print(train_en_sents.shape)
print(train_cn_sents.shape)
print(train_cn_label_sents.shape)
.. parsed-literal::
(5508, 11)
(5508, 12)
(5508, 12)
创建网络
---------
我们将会创建一个Encoder-AttentionDecoder架构的模型结构用来完成机器翻译任务。
首先我们将设置一些必要的网络结构中用到的参数。
.. code:: ipython3
embedding_size = 128
hidden_size = 256
num_encoder_lstm_layers = 1
en_vocab_size = len(list(en_vocab))
cn_vocab_size = len(list(cn_vocab))
epochs = 20
batch_size = 16
Encoder部分
----------------
在编码器的部分,我们通过Embedding查表之后接一个LSTM的方式,构建一个对源语言进行编码的网络。飞桨的RNN系列API,除了LSTM之外,还提供了SimpleRNN、
GRU供使用;同时,还可以使用反向RNN、双向RNN、多层RNN等形式,也可以通过\ ``dropout``\ 参数设置是否对多层RNN的中间层进行\ ``dropout``\ 处理,来防止过拟合。
除了使用序列到序列的RNN操作之外,也可以通过SimpleRNNCell、GRUCell、
LSTMCell等API更灵活地创建单步的RNN计算,甚至通过继承RNNCellBase来实现自己的RNN计算单元。
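例如,下面是一个仅作示意的写法(参数取值只是举例,并非本教程实际使用的配置),展示如何创建一个带层间\ ``dropout``\ 的两层LSTM,以及一个GRU:
.. code:: ipython3
    # 示意:多层LSTM(层间dropout)与GRU的创建方式,参数值仅为举例
    demo_lstm = paddle.nn.LSTM(input_size=128, hidden_size=256,
                               num_layers=2, dropout=0.2)
    demo_gru = paddle.nn.GRU(input_size=128, hidden_size=256)
下面回到本教程实际使用的Encoder实现: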
.. code:: ipython3
# encoder: simply learn representation of source sentence
class Encoder(paddle.nn.Layer):
def __init__(self):
super(Encoder, self).__init__()
self.emb = paddle.nn.Embedding(en_vocab_size, embedding_size,)
self.lstm = paddle.nn.LSTM(input_size=embedding_size,
hidden_size=hidden_size,
num_layers=num_encoder_lstm_layers)
def forward(self, x):
x = self.emb(x)
x, (_, _) = self.lstm(x)
return x
AttentionDecoder部分
------------------------
在解码器部分,我们通过一个带有注意力机制的LSTM来完成解码。
- 单步的LSTM:在解码器的实现部分,我们同样使用LSTM。与Encoder部分不同的是,下面的代码每次只让LSTM往前计算一步,整体的recurrent循环是在训练循环内完成的。
- 注意力机制:这里使用了一个由两个Linear组成的网络来完成注意力权重的计算,它用来计算出目标语言在每次翻译一个词的时候,需要对源语言当中的每个词赋予多少的权重。
- 对于第一次接触这样的网络结构的读者来说,下面的代码理解起来可能稍微有些复杂,你可以通过插入打印每个tensor在不同步骤时的形状的方式来更好地理解(在下面的代码之后,给出了一个打印形状的小示意)。
.. code:: ipython3
# only move one step of LSTM,
# the recurrent loop is implemented inside training loop
class AttentionDecoder(paddle.nn.Layer):
def __init__(self):
super(AttentionDecoder, self).__init__()
self.emb = paddle.nn.Embedding(cn_vocab_size, embedding_size)
self.lstm = paddle.nn.LSTM(input_size=embedding_size + hidden_size,
hidden_size=hidden_size)
# for computing attention weights
self.attention_linear1 = paddle.nn.Linear(hidden_size * 2, hidden_size)
self.attention_linear2 = paddle.nn.Linear(hidden_size, 1)
# for computing output logits
self.outlinear =paddle.nn.Linear(hidden_size, cn_vocab_size)
def forward(self, x, previous_hidden, previous_cell, encoder_outputs):
x = self.emb(x)
attention_inputs = paddle.concat((encoder_outputs,
paddle.tile(previous_hidden, repeat_times=[1, MAX_LEN+1, 1])),
axis=-1
)
attention_hidden = self.attention_linear1(attention_inputs)
attention_hidden = F.tanh(attention_hidden)
attention_logits = self.attention_linear2(attention_hidden)
attention_logits = paddle.squeeze(attention_logits)
attention_weights = F.softmax(attention_logits)
attention_weights = paddle.expand_as(paddle.unsqueeze(attention_weights, -1),
encoder_outputs)
context_vector = paddle.multiply(encoder_outputs, attention_weights)
context_vector = paddle.reduce_sum(context_vector, 1)
context_vector = paddle.unsqueeze(context_vector, 1)
lstm_input = paddle.concat((x, context_vector), axis=-1)
# LSTM requirement to previous hidden/state:
# (number_of_layers * direction, batch, hidden)
previous_hidden = paddle.transpose(previous_hidden, [1, 0, 2])
previous_cell = paddle.transpose(previous_cell, [1, 0, 2])
x, (hidden, cell) = self.lstm(lstm_input, (previous_hidden, previous_cell))
# change the return to (batch, number_of_layers * direction, hidden)
hidden = paddle.transpose(hidden, [1, 0, 2])
cell = paddle.transpose(cell, [1, 0, 2])
output = self.outlinear(hidden)
output = paddle.squeeze(output)
return output, (hidden, cell)
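如果想更直观地理解上面各步的形状变化,可以像下面这样用全零的假输入跑一次解码器并打印形状。这只是一个检查用的小示意,并不参与训练,batch大小4也只是随意取的一个值:
.. code:: ipython3
    # 示意:用全零的假输入检查解码器输出的形状
    demo_decoder = AttentionDecoder()
    demo_batch = 4
    demo_word = paddle.to_tensor(np.zeros([demo_batch, 1], dtype='int64'))
    demo_hidden = paddle.zeros([demo_batch, 1, hidden_size])
    demo_cell = paddle.zeros([demo_batch, 1, hidden_size])
    demo_encoder_out = paddle.zeros([demo_batch, MAX_LEN + 1, hidden_size])
    demo_logits, (demo_h, demo_c) = demo_decoder(demo_word, demo_hidden, demo_cell, demo_encoder_out)
    print(demo_logits.shape)            # 预期为 [4, cn_vocab_size]
    print(demo_h.shape, demo_c.shape)   # 预期均为 [4, 1, hidden_size]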
训练模型
--------
接下来我们开始训练模型。
- 在每个epoch开始之前,我们对训练数据进行了随机打乱。
- 我们通过多次调用\ ``atten_decoder``\ ,在这里实现了解码时的recurrent循环。
- ``teacher forcing``\ 策略:
  在每次解码下一个词时,我们把训练数据当中的真实词作为预测下一个词时的输入。相应地,你也可以尝试用模型预测的结果作为下一个词的输入,或者两者混合使用(在下面的训练输出之后,给出了一个混合使用的简单示意)。
.. code:: ipython3
encoder = Encoder()
atten_decoder = AttentionDecoder()
opt = paddle.optimizer.Adam(learning_rate=0.001,
parameters=encoder.parameters()+atten_decoder.parameters())
for epoch in range(epochs):
print("epoch:{}".format(epoch))
# shuffle training data
perm = np.random.permutation(len(train_en_sents))
train_en_sents_shuffled = train_en_sents[perm]
train_cn_sents_shuffled = train_cn_sents[perm]
train_cn_label_sents_shuffled = train_cn_label_sents[perm]
for iteration in range(train_en_sents_shuffled.shape[0] // batch_size):
x_data = train_en_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]
sent = paddle.to_tensor(x_data)
en_repr = encoder(sent)
x_cn_data = train_cn_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]
x_cn_label_data = train_cn_label_sents_shuffled[(batch_size*iteration):(batch_size*(iteration+1))]
# shape: (batch, num_layer(=1 here) * num_of_direction(=1 here), hidden_size)
hidden = paddle.zeros([batch_size, 1, hidden_size])
cell = paddle.zeros([batch_size, 1, hidden_size])
loss = paddle.zeros([1])
# the decoder recurrent loop mentioned above
for i in range(MAX_LEN + 2):
cn_word = paddle.to_tensor(x_cn_data[:,i:i+1])
cn_word_label = paddle.to_tensor(x_cn_label_data[:,i:i+1])
logits, (hidden, cell) = atten_decoder(cn_word, hidden, cell, en_repr)
step_loss = F.softmax_with_cross_entropy(logits, cn_word_label)
avg_step_loss = paddle.mean(step_loss)
loss += avg_step_loss
loss = loss / (MAX_LEN + 2)
if(iteration % 200 == 0):
print("iter {}, loss:{}".format(iteration, loss.numpy()))
loss.backward()
opt.minimize(loss)
encoder.clear_gradients()
atten_decoder.clear_gradients()
.. parsed-literal::
epoch:0
iter 0, loss:[7.6194725]
iter 200, loss:[3.4147663]
epoch:1
iter 0, loss:[3.0931656]
iter 200, loss:[2.7543137]
epoch:2
iter 0, loss:[2.8413522]
iter 200, loss:[2.340513]
epoch:3
iter 0, loss:[2.597812]
iter 200, loss:[2.5552855]
epoch:4
iter 0, loss:[2.0783448]
iter 200, loss:[2.4544785]
epoch:5
iter 0, loss:[1.8709135]
iter 200, loss:[1.8736631]
epoch:6
iter 0, loss:[1.9589291]
iter 200, loss:[2.119414]
epoch:7
iter 0, loss:[1.5829577]
iter 200, loss:[1.6002902]
epoch:8
iter 0, loss:[1.6022769]
iter 200, loss:[1.52694]
epoch:9
iter 0, loss:[1.3616685]
iter 200, loss:[1.5420443]
epoch:10
iter 0, loss:[1.0397792]
iter 200, loss:[1.2458231]
epoch:11
iter 0, loss:[1.2107158]
iter 200, loss:[1.426417]
epoch:12
iter 0, loss:[1.1840894]
iter 200, loss:[1.0999664]
epoch:13
iter 0, loss:[1.0968472]
iter 200, loss:[0.8149167]
epoch:14
iter 0, loss:[0.95585203]
iter 200, loss:[1.0070628]
epoch:15
iter 0, loss:[0.89463925]
iter 200, loss:[0.8288595]
epoch:16
iter 0, loss:[0.5672495]
iter 200, loss:[0.7317069]
epoch:17
iter 0, loss:[0.76785177]
iter 200, loss:[0.5319323]
epoch:18
iter 0, loss:[0.5250005]
iter 200, loss:[0.4182841]
epoch:19
iter 0, loss:[0.52320284]
iter 200, loss:[0.47618982]
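前面提到,除了完全使用\ ``teacher forcing``\ 之外,也可以把模型自己的预测混合进来作为下一步的输入。下面给出一个辅助函数的简单示意(函数名\ ``choose_next_input``\ 和参数\ ``teacher_forcing_ratio``\ 都是为了说明而假设的,并不是飞桨提供的接口),在上面的解码循环里,可以用它的返回值替换下一步传入\ ``atten_decoder``\ 的词:
.. code:: ipython3
    import numpy as np
    def choose_next_input(true_next_word, logits, teacher_forcing_ratio=0.5):
        # 以teacher_forcing_ratio的概率使用真实词,否则使用模型上一步预测的词
        if float(np.random.rand()) < teacher_forcing_ratio:
            return true_next_word
        return paddle.unsqueeze(paddle.argmax(logits, axis=1), axis=-1)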
使用模型进行机器翻译
-----------------------
根据你所使用的计算设备的不同,上面的训练过程可能需要不等的时间。(在一台Mac笔记本上,大约耗时15~20分钟)
完成上面的模型训练之后,我们可以得到一个能够从英文翻译成中文的机器翻译模型。接下来我们通过一个greedy
search来实现使用该模型完成实际的机器翻译。(实际的任务中,你可能需要用beam
search算法来提升效果,在下面的翻译结果之后给出了一个beam search的简单示意。)
.. code:: ipython3
encoder.eval()
atten_decoder.eval()
num_of_exampels_to_evaluate = 10
indices = np.random.choice(len(train_en_sents), num_of_exampels_to_evaluate, replace=False)
x_data = train_en_sents[indices]
sent = paddle.to_tensor(x_data)
en_repr = encoder(sent)
word = np.array(
[[cn_vocab['<bos>']]] * num_of_exampels_to_evaluate
)
word = paddle.to_tensor(word)
hidden = paddle.zeros([num_of_exampels_to_evaluate, 1, hidden_size])
cell = paddle.zeros([num_of_exampels_to_evaluate, 1, hidden_size])
decoded_sent = []
for i in range(MAX_LEN + 2):
logits, (hidden, cell) = atten_decoder(word, hidden, cell, en_repr)
word = paddle.argmax(logits, axis=1)
decoded_sent.append(word.numpy())
word = paddle.unsqueeze(word, axis=-1)
results = np.stack(decoded_sent, axis=1)
for i in range(num_of_exampels_to_evaluate):
en_input = " ".join(filtered_pairs[indices[i]][0])
ground_truth_translate = "".join(filtered_pairs[indices[i]][1])
model_translate = ""
for k in results[i]:
w = list(cn_vocab)[k]
if w != '<pad>' and w != '<eos>':
model_translate += w
print(en_input)
print("true: {}".format(ground_truth_translate))
print("pred: {}".format(model_translate))
.. parsed-literal::
i agree with him
true: 我同意他。
pred: 我同意他。
i think i ll take a bath tonight
true: 我想我今晚會洗澡。
pred: 我想我今晚會洗澡。
he asked for a drink of water
true: 他要了水喝。
pred: 他喝了一杯水。
i began running
true: 我開始跑。
pred: 我開始跑。
i m sick
true: 我生病了。
pred: 我生病了。
you had better go to the dentist s
true: 你最好去看牙醫。
pred: 你最好去看牙醫。
we went for a walk in the forest
true: 我们去了林中散步。
pred: 我們去公园散步。
you ve arrived very early
true: 你來得很早。
pred: 你去早个。
he pretended not to be listening
true: 他裝作沒在聽。
pred: 他假装聽到它。
he always wanted to study japanese
true: 他一直想學日語。
pred: 他一直想學日語。
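上面提到的beam search,其思路是在每一步保留若干个得分最高的部分译文,而不是像greedy search那样只保留一个。下面给出一个针对单句、未加长度归一化等常见技巧的极简示意(函数名\ ``beam_search_translate``\ 、参数\ ``beam_size``\ 等都是为了说明而假设的写法,仅帮助理解):
.. code:: ipython3
    import numpy as np
    def beam_search_translate(en_ids, beam_size=3):
        # en_ids: 一条已padding并反转过的英文句子的词ID序列(例如train_en_sents[0])
        # 把beam_size个候选当作一个batch送入解码器
        sent = paddle.to_tensor(np.array([en_ids] * beam_size))
        en_repr = encoder(sent)
        hidden = paddle.zeros([beam_size, 1, hidden_size])
        cell = paddle.zeros([beam_size, 1, hidden_size])
        seqs = [[cn_vocab['<bos>']] for _ in range(beam_size)]
        # 初始时beam_size个候选完全相同,除第0个外都给一个极小的分数,避免重复扩展
        scores = np.array([0.0] + [-1e9] * (beam_size - 1))
        for _ in range(MAX_LEN + 2):
            words = paddle.to_tensor(np.array([[s[-1]] for s in seqs]))
            logits, (hidden, cell) = atten_decoder(words, hidden, cell, en_repr)
            log_probs = np.log(F.softmax(logits).numpy() + 1e-9)   # [beam_size, cn_vocab_size]
            # 已经生成<eos>的候选不再扩展,只允许继续输出<eos>且分数不变
            for b in range(beam_size):
                if seqs[b][-1] == cn_vocab['<eos>']:
                    log_probs[b, :] = -1e9
                    log_probs[b, cn_vocab['<eos>']] = 0.0
            total = scores[:, None] + log_probs                    # 每个(候选, 词)组合的累计得分
            flat_idx = np.argsort(total.reshape(-1))[-beam_size:]  # 全局得分最高的beam_size个
            parents = flat_idx // log_probs.shape[1]
            next_words = flat_idx % log_probs.shape[1]
            seqs = [seqs[p] + [int(w)] for p, w in zip(parents, next_words)]
            scores = total.reshape(-1)[flat_idx]
            # 按被选中候选的父节点重排hidden/cell
            hidden = paddle.to_tensor(hidden.numpy()[parents])
            cell = paddle.to_tensor(cell.numpy()[parents])
        best = seqs[int(np.argmax(scores))]
        return "".join(list(cn_vocab)[k] for k in best
                       if list(cn_vocab)[k] not in ('<pad>', '<bos>', '<eos>'))
    print(beam_search_translate(train_en_sents[0]))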
The End
-------
你还可以通过变换网络结构,调整数据集,尝试不同的参数的方式来进一步提升本示例当中的机器翻译的效果。同时,也可以尝试在其他的类似的任务中用飞桨来完成实际的实践。
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 动态图\n",
"\n",
"从飞桨开源框架2.0beta版本开始,飞桨默认为用户开启了动态图模式。在这种模式下,每次执行一个运算,可以立即得到结果(而不是事先定义好网络结构,然后再执行)。\n",
"\n",
"在动态图模式下,您可以更加方便的组织代码,更容易的调试程序,本示例教程将向你介绍飞桨的动态图的使用。\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 设置环境\n",
"\n",
"我们将使用飞桨2.0beta版本,并确认已经开启了动态图模式。"
]
},
{
"cell_type": "code",
"execution_count": 2,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n",
"89af2088b6e74bdfeef2d4d78e08461ed2aafee5\n"
]
}
],
"source": [
"import paddle\n",
"import paddle.nn.functional as F\n",
"import numpy as np\n",
"\n",
"paddle.disable_static()\n",
"print(paddle.__version__)\n",
"print(paddle.__git_commit__)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 基本用法\n",
"\n",
"在动态图模式下,您可以直接运行一个飞桨提供的API,它会立刻返回结果到python。不再需要首先创建一个计算图,然后再给定数据去运行。"
]
},
{
"cell_type": "code",
"execution_count": 3,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[[-0.49341336 -0.8112665 ]\n",
" [ 0.8929015 0.24661176]\n",
" [-0.64440054 -0.7945008 ]\n",
" [-0.07345356 1.3641853 ]]\n",
"[1. 2.]\n",
"[[0.5065867 1.1887336 ]\n",
" [1.8929014 2.2466118 ]\n",
" [0.35559946 1.2054992 ]\n",
" [0.92654645 3.3641853 ]]\n",
"[-2.1159463 1.386125 -2.2334023 2.654917 ]\n"
]
}
],
"source": [
"a = paddle.randn([4, 2])\n",
"b = paddle.arange(1, 3, dtype='float32')\n",
"\n",
"print(a.numpy())\n",
"print(b.numpy())\n",
"\n",
"c = a + b\n",
"print(c.numpy())\n",
"\n",
"d = paddle.matmul(a, b)\n",
"print(d.numpy())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 使用python的控制流\n",
"\n",
"动态图模式下,您可以使用python的条件判断和循环,这类控制语句来执行神经网络的计算。(不再需要`cond`, `loop`这类OP)\n"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 +> [5 6 7]\n",
"1 +> [5 7 9]\n",
"2 +> [ 5 9 15]\n",
"3 -> [-3 3 21]\n",
"4 -> [-3 11 75]\n",
"5 +> [ 5 37 249]\n",
"6 +> [ 5 69 735]\n",
"7 -> [ -3 123 2181]\n",
"8 +> [ 5 261 6567]\n",
"9 +> [ 5 517 19689]\n"
]
}
],
"source": [
"a = paddle.to_tensor(np.array([1, 2, 3]))\n",
"b = paddle.to_tensor(np.array([4, 5, 6]))\n",
"\n",
"for i in range(10):\n",
" r = paddle.rand([1,])\n",
" if r > 0.5:\n",
" c = paddle.pow(a, i) + b\n",
" print(\"{} +> {}\".format(i, c.numpy()))\n",
" else:\n",
" c = paddle.pow(a, i) - b\n",
" print(\"{} -> {}\".format(i, c.numpy()))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建更加灵活的网络:控制流\n",
"\n",
"- 使用动态图可以用来创建更加灵活的网络,比如根据控制流选择不同的分支网络,和方便的构建权重共享的网络。接下来我们来看一个具体的例子,在这个例子中,第二个线性变换只有0.5的可能性会运行。\n",
"- 在sequence to sequence with attention的机器翻译的示例中,你会看到更实际的使用动态图构建RNN类的网络带来的灵活性。\n"
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [],
"source": [
"class MyModel(paddle.nn.Layer):\n",
" def __init__(self, input_size, hidden_size):\n",
" super(MyModel, self).__init__()\n",
" self.linear1 = paddle.nn.Linear(input_size, hidden_size)\n",
" self.linear2 = paddle.nn.Linear(hidden_size, hidden_size)\n",
" self.linear3 = paddle.nn.Linear(hidden_size, 1)\n",
"\n",
" def forward(self, inputs):\n",
" x = self.linear1(inputs)\n",
" x = F.relu(x)\n",
"\n",
" if paddle.rand([1,]) > 0.5: \n",
" x = self.linear2(x)\n",
" x = F.relu(x)\n",
"\n",
" x = self.linear3(x)\n",
" \n",
" return x "
]
},
{
"cell_type": "code",
"execution_count": 5,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0 [2.0915627]\n",
"200 [0.67530334]\n",
"400 [0.52042854]\n",
"600 [0.28010666]\n",
"800 [0.09739777]\n",
"1000 [0.09307177]\n",
"1200 [0.04252927]\n",
"1400 [0.03095707]\n",
"1600 [0.03022156]\n",
"1800 [0.01616007]\n",
"2000 [0.01069116]\n",
"2200 [0.0055158]\n",
"2400 [0.00195092]\n",
"2600 [0.00101116]\n",
"2800 [0.00192219]\n"
]
}
],
"source": [
"total_data, batch_size, input_size, hidden_size = 1000, 64, 128, 256\n",
"\n",
"x_data = np.random.randn(total_data, input_size).astype(np.float32)\n",
"y_data = np.random.randn(total_data, 1).astype(np.float32)\n",
"\n",
"model = MyModel(input_size, hidden_size)\n",
"\n",
"loss_fn = paddle.nn.MSELoss(reduction='mean')\n",
"optimizer = paddle.optimizer.SGD(learning_rate=0.01, \n",
" parameters=model.parameters())\n",
"\n",
"for t in range(200 * (total_data // batch_size)):\n",
" idx = np.random.choice(total_data, batch_size, replace=False)\n",
" x = paddle.to_tensor(x_data[idx,:])\n",
" y = paddle.to_tensor(y_data[idx,:])\n",
" y_pred = model(x)\n",
"\n",
" loss = loss_fn(y_pred, y)\n",
" if t % 200 == 0:\n",
" print(t, loss.numpy())\n",
"\n",
" loss.backward()\n",
" optimizer.minimize(loss)\n",
" model.clear_gradients()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 构建更加灵活的网络:共享权重\n",
"\n",
"- 使用动态图还可以更加方便的创建共享权重的网络,下面的示例展示了一个共享了权重的简单的AutoEncoder的示例。\n",
"- 你也可以参考图像搜索的示例看到共享参数权重的更实际的使用。"
]
},
{
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"step: 0, loss: [0.37666085]\n",
"step: 1, loss: [0.3063845]\n",
"step: 2, loss: [0.2647248]\n",
"step: 3, loss: [0.23831272]\n",
"step: 4, loss: [0.21714918]\n",
"step: 5, loss: [0.1955545]\n",
"step: 6, loss: [0.17261818]\n",
"step: 7, loss: [0.15009595]\n",
"step: 8, loss: [0.13051331]\n",
"step: 9, loss: [0.11537809]\n"
]
}
],
"source": [
"inputs = paddle.rand((256, 64))\n",
"\n",
"linear = paddle.nn.Linear(64, 8, bias_attr=False)\n",
"loss_fn = paddle.nn.MSELoss()\n",
"optimizer = paddle.optimizer.Adam(0.01, parameters=linear.parameters())\n",
"\n",
"for i in range(10):\n",
" hidden = linear(inputs)\n",
" # weight from input to hidden is shared with the linear mapping from hidden to output\n",
" outputs = paddle.matmul(hidden, linear.weight, transpose_y=True) \n",
" loss = loss_fn(outputs, inputs)\n",
" loss.backward()\n",
" print(\"step: {}, loss: {}\".format(i, loss.numpy()))\n",
" optimizer.minimize(loss)\n",
" linear.clear_gradients()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# The end\n",
"\n",
"可以看到使用动态图带来了更灵活易用的方式来组网和训练。"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
动态图
======
从飞桨开源框架2.0beta版本开始,飞桨默认为用户开启了动态图模式。在这种模式下,每次执行一个运算,可以立即得到结果(而不是事先定义好网络结构,然后再执行)。
在动态图模式下,您可以更加方便的组织代码,更容易的调试程序,本示例教程将向你介绍飞桨的动态图的使用。
设置环境
--------
我们将使用飞桨2.0beta版本,并确认已经开启了动态图模式。
.. code:: ipython3
import paddle
import paddle.nn.functional as F
import numpy as np
paddle.disable_static()
print(paddle.__version__)
print(paddle.__git_commit__)
.. parsed-literal::
0.0.0
89af2088b6e74bdfeef2d4d78e08461ed2aafee5
基本用法
--------
在动态图模式下,您可以直接运行一个飞桨提供的API,它会立刻返回结果到python。不再需要首先创建一个计算图,然后再给定数据去运行。
.. code:: ipython3
a = paddle.randn([4, 2])
b = paddle.arange(1, 3, dtype='float32')
print(a.numpy())
print(b.numpy())
c = a + b
print(c.numpy())
d = paddle.matmul(a, b)
print(d.numpy())
.. parsed-literal::
[[-0.49341336 -0.8112665 ]
[ 0.8929015 0.24661176]
[-0.64440054 -0.7945008 ]
[-0.07345356 1.3641853 ]]
[1. 2.]
[[0.5065867 1.1887336 ]
[1.8929014 2.2466118 ]
[0.35559946 1.2054992 ]
[0.92654645 3.3641853 ]]
[-2.1159463 1.386125 -2.2334023 2.654917 ]
使用python的控制流
------------------
动态图模式下,您可以使用python的条件判断和循环,这类控制语句来执行神经网络的计算。(不再需要\ ``cond``,
``loop``\ 这类OP)
.. code:: ipython3
a = paddle.to_tensor(np.array([1, 2, 3]))
b = paddle.to_tensor(np.array([4, 5, 6]))
for i in range(10):
r = paddle.rand([1,])
if r > 0.5:
c = paddle.pow(a, i) + b
print("{} +> {}".format(i, c.numpy()))
else:
c = paddle.pow(a, i) - b
print("{} -> {}".format(i, c.numpy()))
.. parsed-literal::
0 +> [5 6 7]
1 +> [5 7 9]
2 +> [ 5 9 15]
3 -> [-3 3 21]
4 -> [-3 11 75]
5 +> [ 5 37 249]
6 +> [ 5 69 735]
7 -> [ -3 123 2181]
8 +> [ 5 261 6567]
9 +> [ 5 517 19689]
构建更加灵活的网络:控制流
-------------------------------
- 使用动态图可以创建更加灵活的网络,比如根据控制流选择不同的分支网络,以及方便地构建权重共享的网络。接下来我们来看一个具体的例子,在这个例子中,第二个线性变换只有0.5的可能性会运行。
- 在sequence to sequence with
  attention的机器翻译的示例中,你会看到更实际的使用动态图构建RNN类的网络带来的灵活性。
.. code:: ipython3
class MyModel(paddle.nn.Layer):
def __init__(self, input_size, hidden_size):
super(MyModel, self).__init__()
self.linear1 = paddle.nn.Linear(input_size, hidden_size)
self.linear2 = paddle.nn.Linear(hidden_size, hidden_size)
self.linear3 = paddle.nn.Linear(hidden_size, 1)
def forward(self, inputs):
x = self.linear1(inputs)
x = F.relu(x)
if paddle.rand([1,]) > 0.5:
x = self.linear2(x)
x = F.relu(x)
x = self.linear3(x)
return x
.. code:: ipython3
total_data, batch_size, input_size, hidden_size = 1000, 64, 128, 256
x_data = np.random.randn(total_data, input_size).astype(np.float32)
y_data = np.random.randn(total_data, 1).astype(np.float32)
model = MyModel(input_size, hidden_size)
loss_fn = paddle.nn.MSELoss(reduction='mean')
optimizer = paddle.optimizer.SGD(learning_rate=0.01,
parameters=model.parameters())
for t in range(200 * (total_data // batch_size)):
idx = np.random.choice(total_data, batch_size, replace=False)
x = paddle.to_tensor(x_data[idx,:])
y = paddle.to_tensor(y_data[idx,:])
y_pred = model(x)
loss = loss_fn(y_pred, y)
if t % 200 == 0:
print(t, loss.numpy())
loss.backward()
optimizer.minimize(loss)
model.clear_gradients()
.. parsed-literal::
0 [2.0915627]
200 [0.67530334]
400 [0.52042854]
600 [0.28010666]
800 [0.09739777]
1000 [0.09307177]
1200 [0.04252927]
1400 [0.03095707]
1600 [0.03022156]
1800 [0.01616007]
2000 [0.01069116]
2200 [0.0055158]
2400 [0.00195092]
2600 [0.00101116]
2800 [0.00192219]
构建更加灵活的网络:共享权重
---------------------------------
- 使用动态图还可以更加方便地创建共享权重的网络,下面展示了一个共享权重的简单AutoEncoder示例。
- 你也可以参考图像搜索的示例看到共享参数权重的更实际的使用。
.. code:: ipython3
inputs = paddle.rand((256, 64))
linear = paddle.nn.Linear(64, 8, bias_attr=False)
loss_fn = paddle.nn.MSELoss()
optimizer = paddle.optimizer.Adam(0.01, parameters=linear.parameters())
for i in range(10):
hidden = linear(inputs)
# weight from input to hidden is shared with the linear mapping from hidden to output
outputs = paddle.matmul(hidden, linear.weight, transpose_y=True)
loss = loss_fn(outputs, inputs)
loss.backward()
print("step: {}, loss: {}".format(i, loss.numpy()))
optimizer.minimize(loss)
linear.clear_gradients()
.. parsed-literal::
step: 0, loss: [0.37666085]
step: 1, loss: [0.3063845]
step: 2, loss: [0.2647248]
step: 3, loss: [0.23831272]
step: 4, loss: [0.21714918]
step: 5, loss: [0.1955545]
step: 6, loss: [0.17261818]
step: 7, loss: [0.15009595]
step: 8, loss: [0.13051331]
step: 9, loss: [0.11537809]
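如果想确认这里确实只有一份权重在被共享和学习,可以检查一下\ ``linear``\ 层的参数数量和形状,下面是一个简单的示意:
.. code:: ipython3
    # 示意:bias已经通过bias_attr=False关闭,预期只剩一份被共享的权重
    print(len(linear.parameters()))   # 预期输出 1
    print(linear.weight.shape)        # 预期输出 [64, 8]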
The end
--------
可以看到使用动态图带来了更灵活易用的方式来组网和训练。
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 快速上手飞桨(PaddlePaddle)\n",
"\n",
"本示例通过一个基础案例带您从一个飞桨新手快速掌握如何使用。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 1. 安装飞桨\n",
"\n",
"如果您已经安装好飞桨那么可以跳过此步骤。我们针对用户提供了一个方便易用的安装引导页面,您可以通过选择自己的系统和软件版本来获取对应的安装命令,具体可以点击[快速安装](https://www.paddlepaddle.org.cn/install/quick)查看。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. 导入飞桨\n",
"\n",
"这个示例我们采用了Notebook的形式来进行编写,您可以直接通过AIStudio或Jupyter等平台工具来运行这个案例,Notebook的好处是可以通过浏览器来运行Python程序,边看教程边运行结果,可以对比学习,并且可以做到单步运行调试。\n",
"\n",
"安装好飞桨后我们就可以在Python程序中进行飞桨的导入。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"'0.0.0'"
]
},
"execution_count": 4,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"import paddle\n",
"\n",
"paddle.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. 实践一个手写数字识别任务\n",
"\n",
"对于深度学习任务如果简单来看,其实分为几个核心步骤:1. 数据集的准备和加载;2. 模型的构建;3.模型训练;4.模型评估。那么接下来我们就一步一步带您通过飞桨的少量API快速实现。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.1 数据加载\n",
"\n",
"加载我们框架为您准备好的一个手写数字识别数据集。这里我们使用两个数据集,一个用来做模型的训练,一个用来做模型的评估。"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = paddle.vision.datasets.MNIST(mode='train', chw_format=False)\n",
"val_dataset = paddle.vision.datasets.MNIST(mode='test', chw_format=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 模型搭建\n",
"\n",
"通过Sequential将一层一层的网络结构组建起来。通过数据集加载接口的chw_format参数我们已经将[1, 28, 28]形状的图片数据改变形状为[1, 784],那么在组网过程中不在需要先进行Flatten操作。"
]
},
{
"cell_type": "code",
"execution_count": 35,
"metadata": {},
"outputs": [],
"source": [
"mnist = paddle.nn.Sequential(\n",
" paddle.nn.Linear(784, 512),\n",
" paddle.nn.ReLU(),\n",
" paddle.nn.Dropout(0.2),\n",
" paddle.nn.Linear(512, 10)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 模型训练\n",
"\n",
"配置好我们模型训练需要的损失计算方法和优化方法后就可以使用fit接口来开启我们的模型训练过程。"
]
},
{
"cell_type": "code",
"execution_count": 36,
"metadata": {
"scrolled": true
},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/5\n",
"step 1875/1875 [==============================] - loss: 0.2571 - acc: 0.9037 - 10ms/step \n",
"Epoch 2/5\n",
"step 1875/1875 [==============================] - loss: 0.1880 - acc: 0.9458 - 14ms/step \n",
"Epoch 3/5\n",
"step 1875/1875 [==============================] - loss: 0.0279 - acc: 0.9549 - 11ms/step \n",
"Epoch 4/5\n",
"step 1875/1875 [==============================] - loss: 0.0505 - acc: 0.9608 - 13ms/step \n",
"Epoch 5/5\n",
"step 1875/1875 [==============================] - loss: 0.2253 - acc: 0.9646 - 12ms/step \n"
]
}
],
"source": [
"# 开启动态图模式\n",
"paddle.disable_static() \n",
"\n",
"# 预计模型结构生成模型实例,便于进行后续的配置、训练和验证\n",
"model = paddle.Model(mnist) \n",
"\n",
"# 模型训练相关配置,准备损失计算方法,优化器和精度计算方法\n",
"model.prepare(paddle.optimizer.Adam(parameters=mnist.parameters()),\n",
" paddle.nn.CrossEntropyLoss(),\n",
" paddle.metric.Accuracy())\n",
"\n",
"# 开始模型训练\n",
"model.fit(train_dataset,\n",
" epochs=5, \n",
" batch_size=32,\n",
" verbose=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.4 模型评估\n",
"\n",
"使用我们刚才训练得到的模型参数进行模型的评估操作,看看我们的模型精度如何。"
]
},
{
"cell_type": "code",
"execution_count": 37,
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"{'loss': [3.576278e-07], 'acc': 0.9666}"
]
},
"execution_count": 37,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"model.evaluate(val_dataset, verbose=0)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"那么初步训练得到的模型效果在97%附近,我们可以进一步通过调整其中的训练参数来提升我们的模型精度。\n",
"\n",
"至此我们可以知道如何通过飞桨的几个简单API来快速完成一个深度学习任务,大家可以针对自己的需求来更换其中的代码,如果需要使用自己的数据集,那么可以更换数据集加载部分程序,如果需要替换模型,那么可以更改模型代码实现等等。我们也为大家提供了很多其他场景的示例代码来教大家如何使用我们的飞桨API,大家可以查看下面的链接或通过页面导航来查看自己感兴趣的部分。\n",
"\n",
"TODO:补充其他示例教程的快速链接。"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3.7.4 64-bit",
"language": "python",
"name": "python37464bitc4da1ac836094043840bff631bedbf7f"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
快速上手飞桨(PaddlePaddle)
============================
本示例通过一个基础案例带您从一个飞桨新手快速掌握如何使用。
1. 安装飞桨
-----------
如果您已经安装好飞桨那么可以跳过此步骤。我们针对用户提供了一个方便易用的安装引导页面,您可以通过选择自己的系统和软件版本来获取对应的安装命令,具体可以点击\ `快速安装 <https://www.paddlepaddle.org.cn/install/quick>`__\ 查看。
2. 导入飞桨
-----------
这个示例我们采用了Notebook的形式来进行编写,您可以直接通过AIStudio或Jupyter等平台工具来运行这个案例,Notebook的好处是可以通过浏览器来运行Python程序,边看教程边运行结果,可以对比学习,并且可以做到单步运行调试。
安装好飞桨后我们就可以在Python程序中进行飞桨的导入。
.. code:: ipython3
import paddle
paddle.__version__
.. parsed-literal::
'0.0.0'
3. 实践一个手写数字识别任务
---------------------------
对于深度学习任务如果简单来看,其实分为几个核心步骤:1.
数据集的准备和加载;2.
模型的构建;3.模型训练;4.模型评估。那么接下来我们就一步一步带您通过飞桨的少量API快速实现。
3.1 数据加载
~~~~~~~~~~~~
加载我们框架为您准备好的一个手写数字识别数据集。这里我们使用两个数据集,一个用来做模型的训练,一个用来做模型的评估。
.. code:: ipython3
train_dataset = paddle.vision.datasets.MNIST(mode='train', chw_format=False)
val_dataset = paddle.vision.datasets.MNIST(mode='test', chw_format=False)
3.2 模型搭建
~~~~~~~~~~~~
通过Sequential将一层一层的网络结构组建起来。通过数据集加载接口的chw_format参数我们已经将[1,
28, 28]形状的图片数据改变形状为[1,
784],那么在组网过程中不再需要先进行Flatten操作。
.. code:: ipython3
mnist = paddle.nn.Sequential(
paddle.nn.Linear(784, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.2),
paddle.nn.Linear(512, 10)
)
3.3 模型训练
~~~~~~~~~~~~
配置好我们模型训练需要的损失计算方法和优化方法后就可以使用fit接口来开启我们的模型训练过程。
.. code:: ipython3
# 开启动态图模式
paddle.disable_static()
# 根据模型结构生成模型实例,便于进行后续的配置、训练和验证
model = paddle.Model(mnist)
# 模型训练相关配置,准备损失计算方法,优化器和精度计算方法
model.prepare(paddle.optimizer.Adam(parameters=mnist.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
# 开始模型训练
model.fit(train_dataset,
epochs=5,
batch_size=32,
verbose=1)
.. parsed-literal::
Epoch 1/5
step 1875/1875 [==============================] - loss: 0.2571 - acc: 0.9037 - 10ms/step
Epoch 2/5
step 1875/1875 [==============================] - loss: 0.1880 - acc: 0.9458 - 14ms/step
Epoch 3/5
step 1875/1875 [==============================] - loss: 0.0279 - acc: 0.9549 - 11ms/step
Epoch 4/5
step 1875/1875 [==============================] - loss: 0.0505 - acc: 0.9608 - 13ms/step
Epoch 5/5
step 1875/1875 [==============================] - loss: 0.2253 - acc: 0.9646 - 12ms/step
3.4 模型评估
~~~~~~~~~~~~
使用我们刚才训练得到的模型参数进行模型的评估操作,看看我们的模型精度如何。
.. code:: ipython3
model.evaluate(val_dataset, verbose=0)
.. parsed-literal::
{'loss': [3.576278e-07], 'acc': 0.9666}
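除了评估之外,也可以直接用封装好的模型对数据集做预测,下面是一个简单的示意(这里沿用上面的val_dataset,仅演示接口的基本用法):
.. code:: ipython3
    # 示意:对验证集做预测,返回值按模型的输出来组织
    pred_result = model.predict(val_dataset)
    # 本模型只有一个输出(10个类别的得分),因此预期长度为1
    print(len(pred_result))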
那么初步训练得到的模型效果在97%附近,我们可以进一步通过调整其中的训练参数来提升我们的模型精度。
至此我们可以知道如何通过飞桨的几个简单API来快速完成一个深度学习任务,大家可以针对自己的需求来更换其中的代码,如果需要使用自己的数据集,那么可以更换数据集加载部分程序,如果需要替换模型,那么可以更改模型代码实现等等。我们也为大家提供了很多其他场景的示例代码来教大家如何使用我们的飞桨API,大家可以查看下面的链接或通过页面导航来查看自己感兴趣的部分。
TODO:补充其他示例教程的快速链接。
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Hello Paddle: 从普通程序走向机器学习程序"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这篇示例向你介绍普通的程序跟机器学习程序的区别,并带着你用飞桨框架,实现你的第一个机器学习程序。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 普通程序跟机器学习程序的逻辑区别\n",
"\n",
"作为一名开发者,你最熟悉的开始学习一门编程语言,或者一个深度学习框架的方式,可能是通过一个hello, world程序。\n",
"\n",
"学习飞桨也可以这样,这篇小示例教程将会通过一个非常简单的示例来向你展示如何开始使用飞桨。\n",
"\n",
"机器学习程序跟通常的程序最大的不同是,通常的程序是在给定输入的情况下,通过告诉计算机处理数据的规则,然后得到处理后的结果。而机器学习程序则是在并不知道这些规则的情况下,让机器来从数据当中**学习**出来规则。\n",
"\n",
"作为热身,我们先来看看通常的程序所做的事情。\n",
"\n",
"我们现在面临这样一个任务:\n",
"\n",
"我们乘坐出租车的时候,会有一个10元的起步价,只要上车就需要收取。出租车每行驶1公里,需要再支付每公里2元的行驶费用。当一个乘客坐完出租车之后,车上的计价器需要算出来该乘客需要支付的乘车费用。\n",
"\n",
"如果用python来实现该功能,会如下所示:"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"12.0\n",
"16.0\n",
"20.0\n",
"28.0\n",
"30.0\n",
"50.0\n"
]
}
],
"source": [
"def calculate_fee(distance_travelled):\n",
" return 10 + 2 * distance_travelled\n",
"\n",
"for x in [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]:\n",
" print(calculate_fee(x))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"接下来,我们把问题稍微变换一下,现在我们知道乘客每次乘坐出租车的公里数,也知道乘客每次下车的时候支付给出租车司机的总费用。但是并不知道乘车的起步价,以及每公里行驶费用是多少。我们希望让机器从这些数据当中学习出来计算总费用的规则。\n",
"\n",
"更具体的,我们想要让机器学习程序通过数据学习出来下面的公式当中的参数w和参数b(这是一个非常简单的示例,所以`w`和`b`都是浮点数,随着对深度学习了解的深入,你将会知道`w`和`b`通常情况下会是矩阵和向量)。这样,当下次乘车的时候,我们知道了行驶里程`distance_travelled`的时候,我们就可以估算出来用户的总费用`total_fee`了。\n",
"\n",
"```\n",
"total_fee = w * distance_travelled + b\n",
"```\n",
"\n",
"接下来,我们看看用飞桨如何实现这个hello, world级别的机器学习程序。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 导入飞桨\n",
"\n",
"为了能够使用飞桨,我们需要先用python的`import`语句导入飞桨`paddle`。\n",
"同时,为了能够更好的对数组进行计算和处理,我们也还需要导入`numpy`。\n",
"\n",
"如果你是在本机运行这个notebook,而且还没有安装飞桨,可以去飞桨的官网查看如何安装:[飞桨官网](https://www.paddlepaddle.org.cn/)。并且请使用2.0beta或以上版本的飞桨。"
]
},
{
"cell_type": "code",
"execution_count": 25,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"paddle version 0.0.0\n"
]
}
],
"source": [
"import paddle\n",
"paddle.disable_static()\n",
"print(\"paddle version \" + paddle.__version__)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 准备数据\n",
"\n",
"在这个机器学习任务中,我们已经知道了乘客的行驶里程`distance_travelled`,和对应的,这些乘客的总费用`total_fee`。\n",
"通常情况下,在机器学习任务中,像`distance_travelled`这样的输入值,一般被称为`x`(或者特征`feature`),像`total_fee`这样的输出值,一般被称为`y`(或者标签`label`)。\n",
"\n",
"我们用`paddle.to_tensor`把示例数据转换为paddle的Tensor数据。"
]
},
{
"cell_type": "code",
"execution_count": 26,
"metadata": {},
"outputs": [],
"source": [
"x_data = paddle.to_tensor([[1.], [3.0], [5.0], [9.0], [10.0], [20.0]])\n",
"y_data = paddle.to_tensor([[12.], [16.0], [20.0], [28.0], [30.0], [50.0]])"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 用飞桨定义模型的计算\n",
"\n",
"使用飞桨定义模型的计算的过程,本质上,是我们用python,通过飞桨提供的API,来告诉飞桨我们的计算规则的过程。回顾一下,我们想要通过飞桨用机器学习方法,从数据当中学习出来如下公式当中的`w`和`b`。这样在未来,给定`x`时就可以估算出来`y`值(估算出来的`y`记为`y_predict`)\n",
"\n",
"```\n",
"y_predict = w * x + b\n",
"```\n",
"\n",
"我们将会用飞桨的线性变换层:`paddle.nn.Linear`来实现这个计算过程,这个公式里的变量`x, y, w, b, y_predict`,对应着飞桨里面的[Tensor概念](https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/tensor.html)。\n",
"\n",
"### 稍微补充一下\n",
"\n",
"在这里的示例中,我们根据经验,已经事先知道了`distance_travelled`和`total_fee`之间是线性的关系,而在更实际的问题当中,`x`和`y`的关系通常是非线性的,因此也就需要使用更多类型,也更复杂的神经网络。(比如,BMI指数跟你的身高就不是线性关系,一张图片里的某个像素值跟这个图片是猫还是狗也不是线性关系。)\n"
]
},
{
"cell_type": "code",
"execution_count": 27,
"metadata": {},
"outputs": [],
"source": [
"linear = paddle.nn.Linear(in_features=1, out_features=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 准备好运行飞桨\n",
"\n",
"机器(计算机)在一开始的时候会随便猜`w`和`b`,我们先看看机器猜的怎么样。你应该可以看到,这时候的`w`是一个随机值,`b`是0.0,这是飞桨的初始化策略,也是这个领域常用的初始化策略。(如果你愿意,也可以采用其他的初始化的方式,今后你也会看到,选择不同的初始化策略也是对于做好深度学习任务来说很重要的一点)。"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"w before optimize: -1.7107375860214233\n",
"b before optimize: 0.0\n"
]
}
],
"source": [
"w_before_opt = linear.weight.numpy().item()\n",
"b_before_opt = linear.bias.numpy().item()\n",
"\n",
"print(\"w before optimize: {}\".format(w_before_opt))\n",
"print(\"b before optimize: {}\".format(b_before_opt))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 告诉飞桨怎么样学习\n",
"\n",
"前面我们定义好了神经网络(尽管是一个最简单的神经网络),我们还需要告诉飞桨,怎么样去**学习**,从而能得到参数`w`和`b`。\n",
"\n",
"这个过程简单的来陈述一下,你应该就会大致明白了(尽管背后的理论和知识还需要逐步的去学习)。在机器学习/深度学习当中,机器(计算机)在最开始的时候,得到参数`w`和`b`的方式是随便猜一下,用这种随便猜测得到的参数值,去进行计算(预测)的时候,得到的`y_predict`,跟实际的`y`值一定是有**差距**的。接下来,机器会根据这个差距来**调整`w`和`b`**,随着这样的逐步的调整,`w`和`b`会越来越正确,`y_predict`跟`y`之间的差距也会越来越小,从而最终能得到好用的`w`和`b`。这个过程就是机器**学习**的过程。\n",
"\n",
"用更加技术的语言来说,衡量**差距**的函数(一个公式)就是损失函数,用来**调整**参数的方法就是优化算法。\n",
"\n",
"在本示例当中,我们用最简单的均方误差(mean square error)作为损失函数(`paddle.nn.MSELoss`);和最常见的优化算法SGD(stocastic gradient descent)作为优化算法(传给`paddle.optimizer.SGD`的参数`learning_rate`,你可以理解为控制每次调整的步子大小的参数)。"
]
},
{
"cell_type": "code",
"execution_count": 29,
"metadata": {},
"outputs": [],
"source": [
"mse_loss = paddle.nn.MSELoss()\n",
"sgd_optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters = linear.parameters())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 运行优化算法\n",
"\n",
"接下来,我们让飞桨运行一下这个优化算法,这会是一个前面介绍过的逐步调整参数的过程,你应该可以看到loss值(衡量`y`和`y_predict`的差距的`loss`)在不断的降低。"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"epoch 0 loss [2107.3943]\n",
"epoch 1000 loss [7.8432994]\n",
"epoch 2000 loss [1.7537074]\n",
"epoch 3000 loss [0.39211753]\n",
"epoch 4000 loss [0.08767726]\n",
"finished training, loss [0.01963376]\n"
]
}
],
"source": [
"total_epoch = 5000\n",
"for i in range(total_epoch):\n",
" y_predict = linear(x_data)\n",
" loss = mse_loss(y_predict, y_data)\n",
" loss.backward()\n",
" sgd_optimizer.minimize(loss)\n",
" linear.clear_gradients()\n",
" \n",
" if i%1000 == 0:\n",
" print(\"epoch {} loss {}\".format(i, loss.numpy()))\n",
" \n",
"print(\"finished training, loss {}\".format(loss.numpy()))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 机器学习出来的参数\n",
"\n",
"经过了这样的对参数`w`和`b`的调整(**学习**),我们再通过下面的程序,来看看现在的参数变成了多少。你应该会发现`w`变成了很接近2.0的一个值,`b`变成了接近10.0的一个值。虽然并不是正好的2和10,但却是从数据当中学习出来的还不错的模型的参数,可以在未来的时候,用从这批数据当中学习到的参数来预估了。(如果你愿意,也可以通过让机器多学习一段时间,从而得到更加接近2.0和10.0的参数值。)"
]
},
{
"cell_type": "code",
"execution_count": 31,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"w after optimize: 2.017843246459961\n",
"b after optimize: 9.771851539611816\n"
]
}
],
"source": [
"w_after_opt = linear.weight.numpy().item()\n",
"b_after_opt = linear.bias.numpy().item()\n",
"\n",
"print(\"w after optimize: {}\".format(w_after_opt))\n",
"print(\"b after optimize: {}\".format(b_after_opt))\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# hello paddle\n",
"\n",
"通过这个小示例,希望你已经初步了解了飞桨,能在接下来随着对飞桨的更多学习,来解决实际遇到的问题。"
]
},
{
"cell_type": "code",
"execution_count": 32,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"hello paddle\n"
]
}
],
"source": [
"print(\"hello paddle\")"
]
}
],
"metadata": {
"colab": {
"name": "hello-paddle.ipynb",
"private_outputs": true,
"provenance": [],
"toc_visible": true
},
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.7"
}
},
"nbformat": 4,
"nbformat_minor": 1
}
Hello Paddle: 从普通程序走向机器学习程序
========================================
这篇示例向你介绍普通的程序跟机器学习程序的区别,并带着你用飞桨框架,实现你的第一个机器学习程序。
普通程序跟机器学习程序的逻辑区别
--------------------------------
作为一名开发者,你最熟悉的开始学习一门编程语言,或者一个深度学习框架的方式,可能是通过一个hello,
world程序。
学习飞桨也可以这样,这篇小示例教程将会通过一个非常简单的示例来向你展示如何开始使用飞桨。
机器学习程序跟通常的程序最大的不同是,通常的程序是在给定输入的情况下,通过告诉计算机处理数据的规则,然后得到处理后的结果。而机器学习程序则是在并不知道这些规则的情况下,让机器来从数据当中\ **学习**\ 出来规则。
作为热身,我们先来看看通常的程序所做的事情。
我们现在面临这样一个任务:
我们乘坐出租车的时候,会有一个10元的起步价,只要上车就需要收取。出租车每行驶1公里,需要再支付每公里2元的行驶费用。当一个乘客坐完出租车之后,车上的计价器需要算出来该乘客需要支付的乘车费用。
如果用python来实现该功能,会如下所示:
.. code:: ipython3
def calculate_fee(distance_travelled):
return 10 + 2 * distance_travelled
for x in [1.0, 3.0, 5.0, 9.0, 10.0, 20.0]:
print(calculate_fee(x))
.. parsed-literal::
12.0
16.0
20.0
28.0
30.0
50.0
接下来,我们把问题稍微变换一下,现在我们知道乘客每次乘坐出租车的公里数,也知道乘客每次下车的时候支付给出租车司机的总费用。但是并不知道乘车的起步价,以及每公里行驶费用是多少。我们希望让机器从这些数据当中学习出来计算总费用的规则。
更具体的,我们想要让机器学习程序通过数据学习出来下面的公式当中的参数w和参数b(这是一个非常简单的示例,所以\ ``w``\ 和\ ``b``\ 都是浮点数,随着对深度学习了解的深入,你将会知道\ ``w``\ 和\ ``b``\ 通常情况下会是矩阵和向量)。这样,当下次乘车的时候,我们知道了行驶里程\ ``distance_travelled``\ 的时候,我们就可以估算出来用户的总费用\ ``total_fee``\ 了。
::
total_fee = w * distance_travelled + b
接下来,我们看看用飞桨如何实现这个hello, world级别的机器学习程序。
导入飞桨
---------
为了能够使用飞桨,我们需要先用python的\ ``import``\ 语句导入飞桨\ ``paddle``\ 。
同时,为了能够更好的对数组进行计算和处理,我们也还需要导入\ ``numpy``\ 。
如果你是在本机运行这个notebook,而且还没有安装飞桨,可以去飞桨的官网查看如何安装:\ `飞桨官网 <https://www.paddlepaddle.org.cn/>`__\ 。并且请使用2.0beta或以上版本的飞桨。
.. code:: ipython3
import paddle
paddle.disable_static()
print("paddle version " + paddle.__version__)
.. parsed-literal::
paddle version 0.0.0
准备数据
---------
在这个机器学习任务中,我们已经知道了乘客的行驶里程\ ``distance_travelled``\ ,和对应的,这些乘客的总费用\ ``total_fee``\ 。
通常情况下,在机器学习任务中,像\ ``distance_travelled``\ 这样的输入值,一般被称为\ ``x``\ (或者特征\ ``feature``\ ),像\ ``total_fee``\ 这样的输出值,一般被称为\ ``y``\ (或者标签\ ``label``)。
我们用\ ``paddle.to_tensor``\ 把示例数据转换为paddle的Tensor数据。
.. code:: ipython3
x_data = paddle.to_tensor([[1.], [3.0], [5.0], [9.0], [10.0], [20.0]])
y_data = paddle.to_tensor([[12.], [16.0], [20.0], [28.0], [30.0], [50.0]])
用飞桨定义模型的计算
--------------------
使用飞桨定义模型的计算的过程,本质上,是我们用python,通过飞桨提供的API,来告诉飞桨我们的计算规则的过程。回顾一下,我们想要通过飞桨用机器学习方法,从数据当中学习出来如下公式当中的\ ``w``\ 和\ ``b``\ 。这样在未来,给定\ ``x``\ 时就可以估算出来\ ``y``\ 值(估算出来的\ ``y``\ 记为\ ``y_predict``\ )
::
y_predict = w * x + b
我们将会用飞桨的线性变换层:\ ``paddle.nn.Linear``\ 来实现这个计算过程,这个公式里的变量\ ``x, y, w, b, y_predict``\ ,对应着飞桨里面的\ `Tensor概念 <https://www.paddlepaddle.org.cn/documentation/docs/zh/beginners_guide/basic_concept/tensor.html>`__\ 。
稍微补充一下
~~~~~~~~~~~~
在这里的示例中,我们根据经验,已经事先知道了\ ``distance_travelled``\ 和\ ``total_fee``\ 之间是线性的关系,而在更实际的问题当中,\ ``x``\ 和\ ``y``\ 的关系通常是非线性的,因此也就需要使用更多类型,也更复杂的神经网络。(比如,BMI指数跟你的身高就不是线性关系,一张图片里的某个像素值跟这个图片是猫还是狗也不是线性关系。)
.. code:: ipython3
linear = paddle.nn.Linear(in_features=1, out_features=1)
准备好运行飞桨
----------------
机器(计算机)在一开始的时候会随便猜\ ``w``\ 和\ ``b``\ ,我们先看看机器猜的怎么样。你应该可以看到,这时候的\ ``w``\ 是一个随机值,\ ``b``\ 是0.0,这是飞桨的初始化策略,也是这个领域常用的初始化策略。(如果你愿意,也可以采用其他的初始化的方式,今后你也会看到,选择不同的初始化策略也是对于做好深度学习任务来说很重要的一点)。
.. code:: ipython3
w_before_opt = linear.weight.numpy().item()
b_before_opt = linear.bias.numpy().item()
print("w before optimize: {}".format(w_before_opt))
print("b before optimize: {}".format(b_before_opt))
.. parsed-literal::
w before optimize: -1.7107375860214233
b before optimize: 0.0
告诉飞桨怎么样学习
--------------------
前面我们定义好了神经网络(尽管是一个最简单的神经网络),我们还需要告诉飞桨,怎么样去\ **学习**\ ,从而能得到参数\ ``w``\ 和\ ``b``\ 。
这个过程简单的来陈述一下,你应该就会大致明白了(尽管背后的理论和知识还需要逐步的去学习)。在机器学习/深度学习当中,机器(计算机)在最开始的时候,得到参数\ ``w``\ 和\ ``b``\ 的方式是随便猜一下,用这种随便猜测得到的参数值,去进行计算(预测)的时候,得到的\ ``y_predict``\ ,跟实际的\ ``y``\ 值一定是有\ **差距**\ 的。接下来,机器会根据这个差距来\ **调整\ ``w``\ 和\ ``b``**\ ,随着这样的逐步的调整,\ ``w``\ 和\ ``b``\ 会越来越正确,\ ``y_predict``\ 跟\ ``y``\ 之间的差距也会越来越小,从而最终能得到好用的\ ``w``\ 和\ ``b``\ 。这个过程就是机器\ **学习**\ 的过程。
用更加技术的语言来说,衡量\ **差距**\ 的函数(一个公式)就是损失函数,用来\ **调整**\ 参数的方法就是优化算法。
在本示例当中,我们用最简单的均方误差(mean square
error)作为损失函数(``paddle.nn.MSELoss``);并用最常见的优化算法SGD(stochastic
gradient
descent)作为优化算法(传给\ ``paddle.optimizer.SGD``\ 的参数\ ``learning_rate``\ ,你可以理解为控制每次调整的步子大小的参数)。
.. code:: ipython3
mse_loss = paddle.nn.MSELoss()
sgd_optimizer = paddle.optimizer.SGD(learning_rate=0.001, parameters = linear.parameters())
运行优化算法
---------------
接下来,我们让飞桨运行一下这个优化算法,这会是一个前面介绍过的逐步调整参数的过程,你应该可以看到loss值(衡量\ ``y``\ 和\ ``y_predict``\ 的差距的\ ``loss``)在不断的降低。
.. code:: ipython3
total_epoch = 5000
for i in range(total_epoch):
y_predict = linear(x_data)
loss = mse_loss(y_predict, y_data)
loss.backward()
sgd_optimizer.minimize(loss)
linear.clear_gradients()
if i%1000 == 0:
print("epoch {} loss {}".format(i, loss.numpy()))
print("finished training, loss {}".format(loss.numpy()))
.. parsed-literal::
epoch 0 loss [2107.3943]
epoch 1000 loss [7.8432994]
epoch 2000 loss [1.7537074]
epoch 3000 loss [0.39211753]
epoch 4000 loss [0.08767726]
finished training, loss [0.01963376]
机器学习出来的参数
-------------------
经过了这样的对参数\ ``w``\ 和\ ``b``\ 的调整(\ **学习**),我们再通过下面的程序,来看看现在的参数变成了多少。你应该会发现\ ``w``\ 变成了很接近2.0的一个值,\ ``b``\ 变成了接近10.0的一个值。虽然并不是正好的2和10,但却是从数据当中学习出来的还不错的模型的参数,可以在未来的时候,用从这批数据当中学习到的参数来预估了。(如果你愿意,也可以通过让机器多学习一段时间,从而得到更加接近2.0和10.0的参数值。)
.. code:: ipython3
w_after_opt = linear.weight.numpy().item()
b_after_opt = linear.bias.numpy().item()
print("w after optimize: {}".format(w_after_opt))
print("b after optimize: {}".format(b_after_opt))
.. parsed-literal::
w after optimize: 2.017843246459961
b after optimize: 9.771851539611816
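有了学习到的参数,就可以直接用这个线性层来估算新的行驶里程对应的总费用了,比如下面这个小例子(15.0公里只是随意取的一个数):
.. code:: ipython3
    # 示意:估算行驶15公里的总费用,按w≈2、b≈10,预期结果接近40
    estimated_fee = linear(paddle.to_tensor([[15.0]]))
    print(estimated_fee.numpy())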
hello paddle
---------------
通过这个小示例,希望你已经初步了解了飞桨,能在接下来随着对飞桨的更多学习,来解决实际遇到的问题。
.. code:: ipython3
print("hello paddle")
.. parsed-literal::
hello paddle
{
"metadata": {
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.4-final"
},
"orig_nbformat": 2,
"kernelspec": {
"name": "python37464bitc4da1ac836094043840bff631bedbf7f",
"display_name": "Python 3.7.4 64-bit"
}
},
"nbformat": 4,
"nbformat_minor": 2,
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 飞桨高层API使用指南\n",
"\n",
"## 1. 简介\n",
"\n",
"飞桨2.0全新推出高层API,是对飞桨API的进一步封装与升级,提供了更加简洁易用的API,进一步提升了飞桨的易学易用性,并增强飞桨的功能。\n",
"\n",
"飞桨高层API面向从深度学习小白到资深开发者的所有人群,对于AI初学者来说,使用高层API可以简单快速的构建深度学习项目,对于资深开发者来说,可以快速完成算法迭代。\n",
"\n",
"飞桨高层API具有以下特点:\n",
"\n",
"* 易学易用: 高层API是对普通动态图API的进一步封装和优化,同时保持与普通API的兼容性,高层API使用更加易学易用,同样的实现使用高层API可以节省大量的代码。\n",
"* 低代码开发: 使用飞桨高层API的一个明显特点是,用户可编程代码量大大缩减。\n",
"* 动静转换: 高层API支持动静转换,用户只需要改一行代码即可实现将动态图代码在静态图模式下训练,既方便用户使用动态图调试模型,又提升了模型训练效率。\n",
"\n",
"在功能增强与使用方式上,高层API有以下升级:\n",
"\n",
"* 模型训练方式升级: 高层API中封装了Model类,继承了Model类的神经网络可以仅用几行代码完成模型的训练。\n",
"* 新增图像处理模块transform: 飞桨新增了图像预处理模块,其中包含数十种数据处理函数,基本涵盖了常用的数据处理、数据增强方法。\n",
"* 提供常用的神经网络模型可供调用: 高层API中集成了计算机视觉领域和自然语言处理领域常用模型,包括但不限于mobilenet、resnet、yolov3、cyclegan、bert、transformer、seq2seq等等。同时发布了对应模型的预训练模型,用户可以直接使用这些模型或者在此基础上完成二次开发。\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. 安装并使用飞桨高层API\n",
"\n",
"飞桨高层API无需独立安装,只需要安装好paddlepaddle即可,安装完成后import paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.vision、NLP领域paddle.text。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": "'0.0.0'"
},
"metadata": {},
"execution_count": 4
}
],
"source": [
"import paddle\n",
"import paddle.vision as vision\n",
"import paddle.text as text\n",
"\n",
"paddle.__version__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 2. 目录\n",
"\n",
"本指南教学内容覆盖\n",
"\n",
"* 使用高层API提供的自带数据集进行相关深度学习任务训练。\n",
"* 使用自定义数据进行数据集的定义、数据预处理和训练。\n",
"* 如何在数据集定义和加载中应用数据增强相关接口。\n",
"* 如何进行模型的组网。\n",
"* 高层API进行模型训练的相关API使用。\n",
"* 如何在fit接口满足需求的时候进行自定义,使用基础API来完成训练。\n",
"* 如何使用多卡来加速训练。\n",
"\n",
"其他端到端的示例教程:\n",
"* TBD"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 3. 数据集定义、加载和数据预处理\n",
"\n",
"对于深度学习任务,均是框架针对各种类型数字的计算,是无法直接使用原始图片和文本等文件来完成。那么就是涉及到了一项动作,就是将原始的各种数据文件进行处理加工,转换成深度学习任务可以使用的数据。\n",
"\n",
"### 3.1 框架自带数据集使用\n",
"\n",
"高层API将一些我们常用到的数据集作为领域API对用户进行开放,对应API所在目录为`paddle.vision.datasets`,那么我们先看下提供了哪些数据集。"
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {
"tags": []
},
"outputs": [
{
"output_type": "execute_result",
"data": {
"text/plain": "['DatasetFolder',\n 'ImageFolder',\n 'MNIST',\n 'Flowers',\n 'Cifar10',\n 'Cifar100',\n 'VOC2012']"
},
"metadata": {},
"execution_count": 17
}
],
"source": [
"paddle.vision.datasets.__all__"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里我们是加载一个手写数字识别的数据集,用`mode`来标识是训练数据还是测试数据集。数据集接口会自动从远端下载数据集到本机缓存目录`~/.cache/paddle/dataset`。"
]
},
{
"cell_type": "code",
"execution_count": 24,
"metadata": {},
"outputs": [],
"source": [
"# 测试数据集\n",
"train_dataset = vision.datasets.MNIST(mode='train')\n",
"\n",
"# 验证数据集\n",
"val_dataset = vision.datasets.MNIST(mode='test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.2 自定义数据集\n",
"\n",
"更多的时候我们是需要自己使用已有的相关数据来定义数据集,那么这里我们通过一个案例来了解如何进行数据集的定义,飞桨为用户提供了`paddle.io.Dataset`基类,让用户通过类的集成来快速实现数据集定义。"
]
},
{
"cell_type": "code",
"execution_count": 23,
"metadata": {
"tags": []
},
"outputs": [
{
"output_type": "stream",
"name": "stdout",
"text": "=============train dataset=============\ntraindata1 label1\ntraindata2 label2\ntraindata3 label3\ntraindata4 label4\n=============evaluation dataset=============\ntestdata1 label1\ntestdata2 label2\ntestdata3 label3\ntestdata4 label4\n"
}
],
"source": [
"from paddle.io import Dataset\n",
"\n",
"\n",
"class MyDataset(Dataset):\n",
" \"\"\"\n",
" 步骤一:继承paddle.io.Dataset类\n",
" \"\"\"\n",
" def __init__(self, mode='train'):\n",
" \"\"\"\n",
" 步骤二:实现构造函数,定义数据读取方式,划分训练和测试数据集\n",
" \"\"\"\n",
" super(MyDataset, self).__init__()\n",
"\n",
" if mode == 'train':\n",
" self.data = [\n",
" ['traindata1', 'label1'],\n",
" ['traindata2', 'label2'],\n",
" ['traindata3', 'label3'],\n",
" ['traindata4', 'label4'],\n",
" ]\n",
" else:\n",
" self.data = [\n",
" ['testdata1', 'label1'],\n",
" ['testdata2', 'label2'],\n",
" ['testdata3', 'label3'],\n",
" ['testdata4', 'label4'],\n",
" ]\n",
" \n",
" def __getitem__(self, index):\n",
" \"\"\"\n",
" 步骤三:实现__getitem__方法,定义指定index时如何获取数据,并返回单条数据(训练数据,对应的标签)\n",
" \"\"\"\n",
" data = self.data[index][0]\n",
" label = self.data[index][1]\n",
"\n",
" return data, label\n",
"\n",
" def __len__(self):\n",
" \"\"\"\n",
" 步骤四:实现__len__方法,返回数据集总数目\n",
" \"\"\"\n",
" return len(self.data)\n",
"\n",
"# 测试定义的数据集\n",
"train_dataset = MyDataset(mode='train')\n",
"val_dataset = MyDataset(mode='test')\n",
"\n",
"print('=============train dataset=============')\n",
"for data, label in train_dataset:\n",
" print(data, label)\n",
"\n",
"print('=============evaluation dataset=============')\n",
"for data, label in val_dataset:\n",
" print(data, label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 3.3 数据增强\n",
"\n",
"训练过程中有时会遇到过拟合的问题,其中一个解决方法就是对训练数据做增强,对数据进行处理得到不同的图像,从而泛化数据集。数据增强API是定义在领域目录的transofrms下,这里我们介绍两种使用方式,一种是基于框架自带数据集,一种是基于自己定义的数据集。\n",
"\n",
"#### 3.3.1 框架自带数据集"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from paddle.vision.transforms import Compose, Resize, ColorJitter\n",
"\n",
"\n",
"# 定义想要使用那些数据增强方式,这里用到了随机调整亮度、对比度和饱和度,改变图片大小\n",
"transform = Compose([ColorJitter(), Resize(size=100)])\n",
"\n",
"# 通过transform参数传递定义好的数据增项方法即可完成对自带数据集的应用\n",
"train_dataset = vision.datasets.MNIST(mode='train', transform=transform)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### 3.3.2 自定义数据集\n",
"\n",
"针对自定义数据集使用数据增强有两种方式,一种是在数据集的构造函数中进行数据增强方法的定义,之后对__getitem__中返回的数据进行应用。另外一种方式也可以给自定义的数据集类暴漏一个构造参数,在实例化类的时候将数据增强方法传递进去。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"from paddle.io import Dataset\n",
"\n",
"\n",
"class MyDataset(Dataset):\n",
" def __init__(self, mode='train'):\n",
" super(MyDataset, self).__init__()\n",
"\n",
" if mode == 'train':\n",
" self.data = [\n",
" ['traindata1', 'label1'],\n",
" ['traindata2', 'label2'],\n",
" ['traindata3', 'label3'],\n",
" ['traindata4', 'label4'],\n",
" ]\n",
" else:\n",
" self.data = [\n",
" ['testdata1', 'label1'],\n",
" ['testdata2', 'label2'],\n",
" ['testdata3', 'label3'],\n",
" ['testdata4', 'label4'],\n",
" ]\n",
"\n",
" # 定义要使用的数据预处理方法,针对图片的操作\n",
" self.transform = Compose([ColorJitter(), Resize(size=100)])\n",
" \n",
" def __getitem__(self, index):\n",
" data = self.data[index][0]\n",
"\n",
" # 在这里对训练数据进行应用\n",
" # 这里只是一个示例,测试时需要将数据集更换为图片数据进行测试\n",
" data = self.transform(data)\n",
"\n",
" label = self.data[index][1]\n",
"\n",
" return data, label\n",
"\n",
" def __len__(self):\n",
" return len(self.data)"
]
},
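  {
   "cell_type": "markdown",
   "metadata": {},
   "source": [
    "对应上面提到的第二种方式,下面给出一个把数据增强方法作为构造参数传入的简单示意(类名`MyDataset2`、变量`train_dataset2`等均为举例):"
   ]
  },
  {
   "cell_type": "code",
   "execution_count": null,
   "metadata": {},
   "outputs": [],
   "source": [
    "class MyDataset2(Dataset):\n",
    "    def __init__(self, mode='train', transform=None):\n",
    "        super(MyDataset2, self).__init__()\n",
    "\n",
    "        if mode == 'train':\n",
    "            self.data = [['traindata1', 'label1'], ['traindata2', 'label2']]\n",
    "        else:\n",
    "            self.data = [['testdata1', 'label1'], ['testdata2', 'label2']]\n",
    "\n",
    "        # 数据增强方法由外部传入,不在类内部写死\n",
    "        self.transform = transform\n",
    "\n",
    "    def __getitem__(self, index):\n",
    "        data, label = self.data[index]\n",
    "        if self.transform is not None:\n",
    "            # 这里只是示意,实际使用时data应是图片数据\n",
    "            data = self.transform(data)\n",
    "        return data, label\n",
    "\n",
    "    def __len__(self):\n",
    "        return len(self.data)\n",
    "\n",
    "# 实例化时把数据增强方法传进去\n",
    "train_dataset2 = MyDataset2(mode='train',\n",
    "                            transform=Compose([ColorJitter(), Resize(size=100)]))"
   ]
  },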
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 4. 模型组网\n",
"\n",
"针对高层API在模型组网上和基础API是统一的一套,无需投入额外的学习使用成本。那么这里我举几个简单的例子来做示例。\n",
"\n",
"### 4.1 Sequential组网\n",
"\n",
"针对顺序的线性网络结构我们可以直接使用Sequential来快速完成组网,可以减少类的定义等代码编写。"
]
},
{
"cell_type": "code",
"execution_count": 28,
"metadata": {},
"outputs": [],
"source": [
"# Sequential形式组网\n",
"mnist = paddle.nn.Sequential(\n",
" paddle.nn.Flatten(),\n",
" paddle.nn.Linear(784, 512),\n",
" paddle.nn.ReLU(),\n",
" paddle.nn.Dropout(0.2),\n",
" paddle.nn.Linear(512, 10)\n",
")"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.2 SubClass组网\n",
"针对一些比较复杂的网络结构,就可以使用Layer子类定义的方式来进行模型代码编写,在`__init__`构造函数中进行组网Layer的声明,在`forward`中使用声明的Layer变量进行前向计算。子类组网方式也可以实现sublayer的复用,针对相同的layer可以在构造函数中一次性定义,在forward中多次调用。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Layer类继承方式组网\n",
"class Mnist(paddle.nn.Layer):\n",
" def __init__(self):\n",
" super(Mnist, self).__init__()\n",
"\n",
" self.flatten = paddle.nn.Flatten()\n",
" self.linear_1 = paddle.nn.Linear(784, 512)\n",
" self.linear_2 = paddle.nn.Linear(512, 10)\n",
" self.relu = paddle.nn.ReLU()\n",
" self.dropout = paddle.nn.Dropout(0.2)\n",
"\n",
" def forward(self, inputs):\n",
" y = self.flatten(inputs)\n",
" y = self.linear_1(y)\n",
" y = self.relu(y)\n",
" y = self.dropout(y)\n",
" y = self.linear_2(y)\n",
"\n",
" return y\n",
"\n",
"mnist = Mnist()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.3 模型封装\n",
"\n",
"定义好网络结构之后我们来使用`paddle.Model`完成模型的封装,将网络结构组合成一个可快速使用高层API进行训练、评估和预测的类。\n",
"\n",
"在封装的时候我们有两种场景,动态图训练模式和静态图训练模式。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 场景1:动态图模式\n",
"\n",
"# 启动动态图训练模式\n",
"paddle.disable_static()\n",
"# 使用GPU训练\n",
"paddle.set_device('gpu')\n",
"# 模型封装\n",
"model = paddle.Model(mnist)\n",
"\n",
"\n",
"# 场景2:静态图模式\n",
"\n",
"# input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')\n",
"# label = paddle.static.InputSpec([None, 1], dtype='int8')\n",
"# model = paddle.Model(mnist, input, label)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 4.4 模型可视化\n",
"\n",
"在组建好我们的网络结构后,一般我们会想去对我们的网络结构进行一下可视化,逐层的去对齐一下我们的网络结构参数,看看是否符合我们的预期。这里可以通过`Model.summary`接口进行可视化展示。\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"model.summary((1, 28, 28))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"另外,summary接口有两种使用方式,下面我们通过两个示例来做展示,除了`Model.summary`这种配套`paddle.Model`封装使用的接口外,还有一套配合没有经过`paddle.Model`封装的方式来使用。可以直接将实例化好的Layer子类放到`paddle.summary`接口中进行可视化呈现。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"paddle.summary(mnist, (1, 28, 28))"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"这里面有一个注意的点,有的用户可能会疑惑为什么要传递`(1, 28, 28)`这个input_size参数,因为在动态图中,网络定义阶段是还没有得到输入数据的形状信息,我们想要做网络结构的呈现就无从下手,那么我们通过告知接口网络结构的输入数据形状,这样网络可以通过逐层的计算推导得到完整的网络结构信息进行呈现。如果是动态图运行模式,那么就不需要给summary接口传递输入数据形状这个值了,因为在Model封装的时候我们已经定义好了InputSpec,其中包含了输入数据的形状格式。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 5. 模型训练\n",
"\n",
"使用`paddle.Model`封装成模型类后进行训练非常的简洁方便,我们可以直接通过调用`Model.fit`就可以完成训练过程。\n",
"\n",
"在使用`Model.fit`接口启动训练前,我们先通过`Model.prepare`接口来对训练进行提前的配置准备工作,包括设置模型优化器,Loss计算方法,精度计算方法等。\n",
"\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 为模型训练做准备,设置优化器,损失函数和精度计算方式\n",
"model.prepare(paddle.optimizer.Adam(parameters=model.parameters()), \n",
" paddle.nn.CrossEntropyLoss(),\n",
" paddle.metric.Accuracy())"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"做好模型训练的前期准备工作后,我们正式调用`fit()`接口来启动训练过程,需要指定一下至少3个关键参数:训练数据集,训练轮次和单次训练数据批次大小。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式\n",
"model.fit(train_dataset, \n",
" epochs=10, \n",
" batch_size=32,\n",
" verbose=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.1 单机单卡\n",
"\n",
"我们把刚才单步教学的训练代码做一个整合,这个完整的代码示例就是我们的单机单卡训练程序。"
]
},
{
"cell_type": "code",
"execution_count": 30,
"metadata": {
"tags": []
},
"outputs": [],
"source": [
"# 启动动态图训练模式\n",
"paddle.disable_static()\n",
"\n",
"# 使用GPU训练\n",
"paddle.set_device('gpu')\n",
"\n",
"# 构建模型训练用的Model,告知需要训练哪个模型\n",
"model = paddle.Model(mnist)\n",
"\n",
"# 为模型训练做准备,设置优化器,损失函数和精度计算方式\n",
"model.prepare(paddle.optimizer.Adam(parameters=model.parameters()), \n",
" paddle.nn.CrossEntropyLoss(),\n",
" paddle.metric.Accuracy())\n",
"\n",
"# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式\n",
"model.fit(train_dataset, \n",
" epochs=10, \n",
" batch_size=32,\n",
" verbose=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 5.2 单机多卡\n",
"\n",
"对于高层API来实现单机多卡非常简单,整个训练代码和单机单卡没有差异。直接使用`paddle.distributed.launch`启动单机单卡的程序即可。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# train.py里面包含的就是单机单卡代码\n",
"python -m paddle.distributed.launch train.py"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 6. 模型评估\n",
"\n",
"对于训练好的模型进行评估操作可以使用`evaluate`接口来实现。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"result = model.evaluate(val_dataset, verbose=1)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 7. 模型预测\n",
"\n",
"高层API中提供`predict`接口,支持用户使用测试数据来完成模型的预测。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"pred_result = model.predict(val_dataset)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 8. 模型部署\n",
"\n",
"### 8.1 模型存储\n",
"\n",
"模型训练和验证达到我们的预期后,可以使用`save`接口来将我们的模型保存下来,用于后续模型的Fine-tuning或推理部署。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 保存用于推理部署的模型(training=False)\n",
"model.save('~/model/mnist', training=False)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### 8.2 预测部署\n",
"\n",
"有了用于推理部署的模型,就可以使用推理部署框架来完成预测服务部署,具体可以参见:[预测部署](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/index_cn.html), 包括服务端部署、移动端部署和模型压缩。"
]
}
]
}
飞桨高层API使用指南
===================
1. 简介
-------
飞桨2.0全新推出高层API,是对飞桨API的进一步封装与升级,提供了更加简洁易用的API,进一步提升了飞桨的易学易用性,并增强飞桨的功能。
飞桨高层API面向从深度学习小白到资深开发者的所有人群,对于AI初学者来说,使用高层API可以简单快速的构建深度学习项目,对于资深开发者来说,可以快速完成算法迭代。
飞桨高层API具有以下特点:
- 易学易用:
高层API是对普通动态图API的进一步封装和优化,同时保持与普通API的兼容性,高层API使用更加易学易用,同样的实现使用高层API可以节省大量的代码。
- 低代码开发:
使用飞桨高层API的一个明显特点是,用户可编程代码量大大缩减。
- 动静转换:
高层API支持动静转换,用户只需要改一行代码即可实现将动态图代码在静态图模式下训练,既方便用户使用动态图调试模型,又提升了模型训练效率。
在功能增强与使用方式上,高层API有以下升级:
- 模型训练方式升级:
高层API中封装了Model类,继承了Model类的神经网络可以仅用几行代码完成模型的训练。
- 新增图像处理模块transform:
飞桨新增了图像预处理模块,其中包含数十种数据处理函数,基本涵盖了常用的数据处理、数据增强方法。
- 提供常用的神经网络模型可供调用:
高层API中集成了计算机视觉领域和自然语言处理领域常用模型,包括但不限于mobilenet、resnet、yolov3、cyclegan、bert、transformer、seq2seq等等。同时发布了对应模型的预训练模型,用户可以直接使用这些模型或者在此基础上完成二次开发。
2. 安装并使用飞桨高层API
------------------------
飞桨高层API无需独立安装,只需要安装好paddlepaddle即可,安装完成后import
paddle即可使用相关高层API,如:paddle.Model、视觉领域paddle.vision、NLP领域paddle.text。
.. code:: ipython3
import paddle
import paddle.vision as vision
import paddle.text as text
paddle.__version__
.. parsed-literal::
'0.0.0'
目录
-------
本指南教学内容覆盖
- 使用高层API提供的自带数据集进行相关深度学习任务训练。
- 使用自定义数据进行数据集的定义、数据预处理和训练。
- 如何在数据集定义和加载中应用数据增强相关接口。
- 如何进行模型的组网。
- 高层API进行模型训练的相关API使用。
- 如何在fit接口无法满足需求的时候进行自定义,使用基础API来完成训练。
- 如何使用多卡来加速训练。
其他端到端的示例教程:

- TBD

3. 数据集定义、加载和数据预处理
-------------------------------
深度学习任务本质上是框架对各种数值型数据的计算,无法直接使用原始的图片、文本等文件来完成。因此需要一个环节:将原始的各种数据文件进行处理加工,转换成深度学习任务可以使用的数据。
3.1 框架自带数据集使用
~~~~~~~~~~~~~~~~~~~~~~
高层API将一些我们常用到的数据集作为领域API对用户进行开放,对应API所在目录为\ ``paddle.vision.datasets``\ ,那么我们先看下提供了哪些数据集。
.. code:: ipython3
paddle.vision.datasets.__all__
.. parsed-literal::
['DatasetFolder',
'ImageFolder',
'MNIST',
'Flowers',
'Cifar10',
'Cifar100',
'VOC2012']
这里我们是加载一个手写数字识别的数据集,用\ ``mode``\ 来标识是训练数据还是测试数据集。数据集接口会自动从远端下载数据集到本机缓存目录\ ``~/.cache/paddle/dataset``\
.. code:: ipython3
# 测试数据集
train_dataset = vision.datasets.MNIST(mode='train')
# 验证数据集
val_dataset = vision.datasets.MNIST(mode='test')
3.2 自定义数据集
~~~~~~~~~~~~~~~~
更多的时候我们需要使用自己已有的相关数据来定义数据集,那么这里我们通过一个案例来了解如何进行数据集的定义。飞桨为用户提供了\ ``paddle.io.Dataset``\ 基类,让用户通过类的继承来快速实现数据集定义。
.. code:: ipython3
from paddle.io import Dataset
class MyDataset(Dataset):
"""
步骤一:继承paddle.io.Dataset类
"""
def __init__(self, mode='train'):
"""
步骤二:实现构造函数,定义数据读取方式,划分训练和测试数据集
"""
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
def __getitem__(self, index):
"""
步骤三:实现__getitem__方法,定义指定index时如何获取数据,并返回单条数据(训练数据,对应的标签)
"""
data = self.data[index][0]
label = self.data[index][1]
return data, label
def __len__(self):
"""
步骤四:实现__len__方法,返回数据集总数目
"""
return len(self.data)
# 测试定义的数据集
train_dataset = MyDataset(mode='train')
val_dataset = MyDataset(mode='test')
print('=============train dataset=============')
for data, label in train_dataset:
print(data, label)
print('=============evaluation dataset=============')
for data, label in val_dataset:
print(data, label)
.. parsed-literal::
=============train dataset=============
traindata1 label1
traindata2 label2
traindata3 label3
traindata4 label4
=============evaluation dataset=============
testdata1 label1
testdata2 label2
testdata3 label3
testdata4 label4
3.3 数据增强
~~~~~~~~~~~~
训练过程中有时会遇到过拟合的问题,其中一个解决方法就是对训练数据做增强,对数据进行处理得到不同的图像,从而泛化数据集。数据增强API定义在领域目录的transforms下,这里我们介绍两种使用方式:一种是基于框架自带数据集,一种是基于自己定义的数据集。
3.3.1 框架自带数据集
^^^^^^^^^^^^^^^^^^^^
.. code:: ipython3
from paddle.vision.transforms import Compose, Resize, ColorJitter
    # 定义想要使用哪些数据增强方式,这里用到了随机调整亮度、对比度和饱和度,以及改变图片大小
transform = Compose([ColorJitter(), Resize(size=100)])
    # 通过transform参数传递定义好的数据增强方法即可完成对自带数据集的应用
train_dataset = vision.datasets.MNIST(mode='train', transform=transform)
3.3.2 自定义数据集
^^^^^^^^^^^^^^^^^^
针对自定义数据集使用数据增强有两种方式,一种是在数据集的构造函数中进行数据增强方法的定义,之后对__getitem__中返回的数据进行应用;另外一种方式是给自定义的数据集类暴露一个构造参数,在实例化类的时候将数据增强方法传递进去(第二种方式的示意写法见本节末尾)。
.. code:: ipython3
from paddle.io import Dataset
class MyDataset(Dataset):
def __init__(self, mode='train'):
super(MyDataset, self).__init__()
if mode == 'train':
self.data = [
['traindata1', 'label1'],
['traindata2', 'label2'],
['traindata3', 'label3'],
['traindata4', 'label4'],
]
else:
self.data = [
['testdata1', 'label1'],
['testdata2', 'label2'],
['testdata3', 'label3'],
['testdata4', 'label4'],
]
# 定义要使用的数据预处理方法,针对图片的操作
self.transform = Compose([ColorJitter(), Resize(size=100)])
def __getitem__(self, index):
data = self.data[index][0]
# 在这里对训练数据进行应用
# 这里只是一个示例,测试时需要将数据集更换为图片数据进行测试
data = self.transform(data)
label = self.data[index][1]
return data, label
def __len__(self):
return len(self.data)
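
上文提到的第二种方式,即通过构造参数把数据增强方法传入自定义数据集,这里给出一个最小示意(仅为示意写法,其中的 transform 构造参数为自拟名称,实际使用时需要把示例数据换成图片数据):

.. code:: ipython3

    from paddle.io import Dataset
    from paddle.vision.transforms import Compose, ColorJitter, Resize


    class MyDataset(Dataset):
        def __init__(self, mode='train', transform=None):
            super(MyDataset, self).__init__()

            if mode == 'train':
                self.data = [
                    ['traindata1', 'label1'],
                    ['traindata2', 'label2'],
                ]
            else:
                self.data = [
                    ['testdata1', 'label1'],
                    ['testdata2', 'label2'],
                ]

            # 数据增强方法由外部传入,未传入时不做任何处理
            self.transform = transform

        def __getitem__(self, index):
            data = self.data[index][0]

            # 只有传入了数据增强方法才进行应用
            if self.transform is not None:
                data = self.transform(data)

            label = self.data[index][1]

            return data, label

        def __len__(self):
            return len(self.data)

    # 实例化时把定义好的数据增强方法传递进去
    train_dataset = MyDataset(mode='train',
                              transform=Compose([ColorJitter(), Resize(size=100)]))
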
4. 模型组网
-----------
高层API在模型组网上和基础API是统一的一套,无需投入额外的学习成本。这里举几个简单的例子来做示例。
4.1 Sequential组网
~~~~~~~~~~~~~~~~~~
针对顺序的线性网络结构我们可以直接使用Sequential来快速完成组网,可以减少类的定义等代码编写。
.. code:: ipython3
# Sequential形式组网
mnist = paddle.nn.Sequential(
paddle.nn.Flatten(),
paddle.nn.Linear(784, 512),
paddle.nn.ReLU(),
paddle.nn.Dropout(0.2),
paddle.nn.Linear(512, 10)
)
4.2 SubClass组网
~~~~~~~~~~~~~~~~
针对一些比较复杂的网络结构,就可以使用Layer子类定义的方式来进行模型代码编写,在\ ``__init__``\ 构造函数中进行组网Layer的声明,在\ ``forward``\ 中使用声明的Layer变量进行前向计算。子类组网方式也可以实现sublayer的复用,针对相同的layer可以在构造函数中一次性定义,在forward中多次调用。
.. code:: ipython3
# Layer类继承方式组网
class Mnist(paddle.nn.Layer):
def __init__(self):
super(Mnist, self).__init__()
self.flatten = paddle.nn.Flatten()
self.linear_1 = paddle.nn.Linear(784, 512)
self.linear_2 = paddle.nn.Linear(512, 10)
self.relu = paddle.nn.ReLU()
self.dropout = paddle.nn.Dropout(0.2)
def forward(self, inputs):
y = self.flatten(inputs)
y = self.linear_1(y)
y = self.relu(y)
y = self.dropout(y)
y = self.linear_2(y)
return y
mnist = Mnist()
4.3 模型封装
~~~~~~~~~~~~
定义好网络结构之后我们来使用\ ``paddle.Model``\ 完成模型的封装,将网络结构组合成一个可快速使用高层API进行训练、评估和预测的类。
在封装的时候我们有两种场景,动态图训练模式和静态图训练模式。
.. code:: ipython3
# 场景1:动态图模式
# 启动动态图训练模式
paddle.disable_static()
# 使用GPU训练
paddle.set_device('gpu')
# 模型封装
model = paddle.Model(mnist)
# 场景2:静态图模式
# input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')
# label = paddle.static.InputSpec([None, 1], dtype='int8')
# model = paddle.Model(mnist, input, label)
4.4 模型可视化
~~~~~~~~~~~~~~
在组建好我们的网络结构后,一般我们会想去对我们的网络结构进行一下可视化,逐层的去对齐一下我们的网络结构参数,看看是否符合我们的预期。这里可以通过\ ``Model.summary``\ 接口进行可视化展示。
.. code:: ipython3
model.summary((1, 28, 28))
另外,summary接口有两种使用方式:除了\ ``Model.summary``\ 这种配套\ ``paddle.Model``\ 封装使用的接口外,还可以在网络没有经过\ ``paddle.Model``\ 封装时,直接将实例化好的Layer子类传入\ ``paddle.summary``\ 接口进行可视化呈现。
.. code:: ipython3
paddle.summary(mnist, (1, 28, 28))
这里面有一个注意的点,有的用户可能会疑惑为什么要传递\ ``(1, 28, 28)``\ 这个input_size参数。因为在动态图中,网络定义阶段还没有得到输入数据的形状信息,我们想要做网络结构的呈现就无从下手,那么通过告知接口网络的输入数据形状,网络就可以通过逐层的计算推导得到完整的结构信息进行呈现。如果是静态图运行模式,就不需要给summary接口传递输入数据形状这个值了,因为在Model封装的时候我们已经定义好了InputSpec,其中包含了输入数据的形状格式。
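
下面补充一个小示意:若像上文场景2那样在封装时传入了InputSpec,则可以不再给summary传入input_size(以下几行仅为示意,变量名model_with_spec为自拟,具体行为以所安装版本为准):

.. code:: ipython3

    # 仅为示意:封装时通过 InputSpec 声明输入形状
    input = paddle.static.InputSpec([None, 1, 28, 28], dtype='float32')
    model_with_spec = paddle.Model(mnist, input)

    # 此时 summary 可以直接从 InputSpec 中获取输入形状,无需再传 input_size
    model_with_spec.summary()
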
5. 模型训练
-----------
使用\ ``paddle.Model``\ 封装成模型类后进行训练非常的简洁方便,我们可以直接通过调用\ ``Model.fit``\ 就可以完成训练过程。
在使用\ ``Model.fit``\ 接口启动训练前,我们先通过\ ``Model.prepare``\ 接口来对训练进行提前的配置准备工作,包括设置模型优化器,Loss计算方法,精度计算方法等。
.. code:: ipython3
# 为模型训练做准备,设置优化器,损失函数和精度计算方式
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
做好模型训练的前期准备工作后,我们正式调用\ ``fit()``\ 接口来启动训练过程,至少需要指定3个关键参数:训练数据集、训练轮次和单次训练的批次大小。
.. code:: ipython3
# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式
model.fit(train_dataset,
epochs=10,
batch_size=32,
verbose=1)
5.1 单机单卡
~~~~~~~~~~~~
我们把刚才单步教学的训练代码做一个整合,这个完整的代码示例就是我们的单机单卡训练程序。
.. code:: ipython3
# 启动动态图训练模式
paddle.disable_static()
# 使用GPU训练
paddle.set_device('gpu')
# 构建模型训练用的Model,告知需要训练哪个模型
model = paddle.Model(mnist)
# 为模型训练做准备,设置优化器,损失函数和精度计算方式
model.prepare(paddle.optimizer.Adam(parameters=model.parameters()),
paddle.nn.CrossEntropyLoss(),
paddle.metric.Accuracy())
# 启动模型训练,指定训练数据集,设置训练轮次,设置每次数据集计算的批次大小,设置日志格式
model.fit(train_dataset,
epochs=10,
batch_size=32,
verbose=1)
5.2 单机多卡
~~~~~~~~~~~~
使用高层API实现单机多卡训练非常简单,整个训练代码和单机单卡没有差异,直接使用\ ``paddle.distributed.launch``\ 启动单机单卡的程序即可。
.. code:: ipython3
# train.py里面包含的就是单机单卡代码
    !python -m paddle.distributed.launch train.py
6. 模型评估
-----------
对于训练好的模型进行评估操作可以使用\ ``evaluate``\ 接口来实现。
.. code:: ipython3
result = model.evaluate(val_dataset, verbose=1)
7. 模型预测
-----------
高层API中提供\ ``predict``\ 接口,支持用户使用测试数据来完成模型的预测。
.. code:: ipython3
pred_result = model.predict(val_dataset)
8. 模型部署
-----------
8.1 模型存储
~~~~~~~~~~~~
模型训练和验证达到我们的预期后,可以使用\ ``save``\ 接口来将我们的模型保存下来,用于后续模型的Fine-tuning或推理部署。
.. code:: ipython3
    # 保存用于推理部署的模型(training=False)
model.save('~/model/mnist', training=False)
8.2 预测部署
~~~~~~~~~~~~
有了用于推理部署的模型,就可以使用推理部署框架来完成预测服务部署,具体可以参见:\ `预测部署 <https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/index_cn.html>`__\ ,包括服务端部署、移动端部署和模型压缩。
################
快速上手
################
在这里PaddlePaddle为大家提供了一些简单的案例,快速上手paddle 2.0:
- `hello paddle <./hello_paddle/hello_paddle.html>`_ :简单介绍 Paddle,完成您的第一个Paddle项目。
- `Paddle 动态图 <./dynamic_graph/dynamic_graph.html>`_ :介绍使用 Paddle 动态图。
- `高层API快速上手 <./getting_started/getting_started.html>`_ :介绍Paddle高层API,快速完成模型搭建。
- `高层API详细介绍 <./high_level_api/high_level_api.html>`_ :详细介绍Paddle高层API。
- `模型加载与保存 <./save_model/save_model.html>`_ :介绍Paddle 模型的加载与保存。
- `线性回归 <./linear_regression/linear_regression.html>`_ :介绍使用 Paddle 实现线性回归任务。
.. toctree::
:hidden:
:titlesonly:
hello_paddle/hello_paddle.rst
dynamic_graph/dynamic_graph.rst
getting_started/getting_started.rst
high_level_api/high_level_api.rst
save_model/save_model.rst
linear_regression/linear_regression.rst
线性回归
========
NOTE:
本示例教程依然在开发中,目前基于2.0beta版本(由于2.0beta尚未正式发版,使用的是最新develop分支whl包安装的paddle)。
简要介绍
--------
经典的线性回归模型主要用来预测一些存在着线性关系的数据集。回归模型可以理解为:存在一个点集,用一条曲线去拟合它分布的过程。如果拟合曲线是一条直线,则称为线性回归。如果是一条二次曲线,则被称为二次回归。线性回归是回归模型中最简单的一种。
本示例简要介绍如何用飞桨开源框架,实现波士顿房价预测。其思路是,假设uci-housing数据集中的房子属性和房价之间的关系可以被属性间的线性组合描述。在模型训练阶段,让假设的预测结果和真实值之间的误差越来越小。在模型预测阶段,预测器会读取训练好的模型,对从未遇见过的房子属性进行房价预测。
环境设置
--------
本示例基于飞桨开源框架2.0版本。
.. code:: ipython3
import paddle
import numpy as np
import os
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
paddle.__version__
.. parsed-literal::
'0.0.0'
数据集
------
本示例采用uci-housing数据集,这是经典线性回归的数据集。数据集共506行,每行14列。前13列用来描述房屋的各种信息,最后一列为该类房屋价格的中位数。飞桨提供了读取uci_housing训练集和测试集的接口,分别为paddle.dataset.uci_housing.train()和paddle.dataset.uci_housing.test()。
前13列属性的具体含义如下图所示:

.. figure:: https://ai-studio-static-online.cdn.bcebos.com/c19602ce74284e3b9a50422f8dc37c0c1c79cf5cd8424994b6a6b073dcb7c057
   :alt: 房屋属性说明
下面我们来浏览一下数据是什么样子的:
.. code:: ipython3
import matplotlib.pyplot as plt
import matplotlib
train_data=paddle.dataset.uci_housing.train()
sample_data=next(train_data())
print(sample_data[0])
    # 画图看特征间的关系,主要是变量两两之间的关系(线性或非线性,有无明显的相关关系)
feature_names = ['CRIM', 'ZN', 'INDUS', 'CHAS', 'NOX', 'RM', 'AGE','DIS', 'RAD', 'TAX', 'PTRATIO', 'B', 'LSTAT', 'MEDV']
feature_num = len(feature_names)
features_np=np.array([x[0] for x in train_data()],np.float32)
labels_np=np.array([x[1] for x in train_data()],np.float32)
data_np=np.c_[features_np,labels_np]
df=pd.DataFrame(data_np,columns=feature_names)
matplotlib.use('TkAgg')
%matplotlib inline
sns.pairplot(df.dropna())
plt.show()
.. parsed-literal::
[-0.0405441 0.06636364 -0.32356227 -0.06916996 -0.03435197 0.05563625
-0.03475696 0.02682186 -0.37171335 -0.21419304 -0.33569506 0.10143217
-0.21172912]
.. image:: linear_regression_files/linear_regression_6_1.png
上图中,对角线上是各属性的直方图,非对角线上的是两个不同属性之间的相关图。
从图中我们可以看出,RM(每栋房平均客房数)、LSTAT(低收入人群占比)与房价呈明显的相关关系,NOX(一氧化氮浓度)和DIS(与波士顿就业中心的距离)之间也呈明显的相关关系,等等。
.. code:: ipython3
# 相关性分析
fig, ax = plt.subplots(figsize=(15,15))
ax=sns.heatmap(df.corr(), cbar=True, annot=True)
ax.set_ylim([14, 0])
plt.show()
.. image:: linear_regression_files/linear_regression_8_0.png
**数据归一化处理**
下图为大家展示各属性的取值范围分布:
.. code:: ipython3
sns.boxplot(data=df.iloc[:,0:13])
.. parsed-literal::
<matplotlib.axes._subplots.AxesSubplot at 0x1a3adcb410>
.. image:: linear_regression_files/linear_regression_11_1.png
做归一化(或 Feature scaling)至少有以下3个理由:
- 过大或过小的数值范围会导致计算时的浮点上溢或下溢。
- 不同的数值范围会导致不同属性对模型的重要性不同(至少在训练的初始阶段如此),而这个隐含的假设常常是不合理的。这会给优化过程造成困难,使训练时间大大加长。
- 很多的机器学习技巧/模型(例如L1、L2正则项,向量空间模型-Vector Space Model)都基于这样的假设:所有的属性取值都差不多是以0为均值且取值范围相近的。
.. code:: ipython3
features_max=[]
features_min=[]
features_avg=[]
for i in range(13):
i_feature_max=max([data[1][0][i] for data in enumerate(train_data())])
features_max.append(i_feature_max)
i_feature_min=min([data[1][0][i] for data in enumerate(train_data())])
features_min.append(i_feature_min)
i_feature_avg=sum([data[1][0][i] for data in enumerate(train_data())])/506
features_avg.append(i_feature_avg)
.. code:: ipython3
BATCH_SIZE=20
def feature_norm(input):
f_size=input.shape[0]
output_features=np.zeros((f_size,13),np.float32)
for batch_id in range(f_size):
for index in range(13):
output_features[batch_id][index]=(input[batch_id][index]-features_avg[index])/(features_max[index]-features_min[index])
return output_features
定义绘制训练过程的损失值变化趋势的方法draw_train_process
.. code:: ipython3
global iter
iter=0
iters=[]
train_costs=[]
def draw_train_process(iters,train_costs):
plt.title("training cost" ,fontsize=24)
plt.xlabel("iter", fontsize=14)
plt.ylabel("cost", fontsize=14)
plt.plot(iters, train_costs,color='red',label='training cost')
plt.show()
**数据提供器**
下面我们分别定义了用于训练和测试的数据提供器。提供器每次读入一个大小为BATCH_SIZE的数据批次。如果您希望加一些随机性,它可以同时定义一个批次大小和一个缓存大小。这样的话,每次数据提供器会从缓存中随机读取批次大小那么多的数据。
.. code:: ipython3
BATCH_SIZE=20
BUF_SIZE=500
train_reader=paddle.batch(paddle.reader.shuffle(paddle.dataset.uci_housing.train(),buf_size=BUF_SIZE),batch_size=BATCH_SIZE)
模型配置
--------
线性回归就是一个从输入到输出的简单的全连接层。
对于波士顿房价数据集,假设属性和房价之间的关系可以被属性间的线性组合描述。
.. code:: ipython3
class Regressor(paddle.nn.Layer):
def __init__(self):
super(Regressor,self).__init__()
self.fc=paddle.nn.Linear(13,1,None)
def forward(self,inputs):
pred=self.fc(inputs)
return pred
模型训练
---------
下面为大家展示模型训练的代码。
这里用到的是线性回归模型最常用的损失函数:均方误差(MSE),用来衡量模型预测的房价和真实房价的差异。
对损失函数进行优化所采用的方法是梯度下降法。
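
均方误差的含义可以用下面几行numpy代码直观说明(仅为演示,其中的预测值与真实值均为假设的数据,与本教程的训练代码无关):

.. code:: ipython3

    import numpy as np

    # 假设的预测房价与真实房价,仅用于演示均方误差的计算
    y_pred_demo = np.array([12.0, 15.5, 20.3], dtype='float32')
    y_true_demo = np.array([11.0, 16.0, 21.0], dtype='float32')

    # 均方误差 = 预测值与真实值之差的平方的平均值
    mse = np.mean((y_pred_demo - y_true_demo) ** 2)
    print(mse)  # ((1.0)**2 + (0.5)**2 + (0.7)**2) / 3 ≈ 0.58
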
.. code:: ipython3
y_preds=[]
labels_list=[]
def train(model):
print('start training ... ')
model.train()
EPOCH_NUM=500
optimizer=paddle.optimizer.SGD(learning_rate=0.001, parameters = model.parameters())
iter=0
for epoch_id in range(EPOCH_NUM):
train_cost=0
for batch_id,data in enumerate(train_reader()):
features_np=np.array([x[0] for x in data],np.float32)
labels_np=np.array([x[1] for x in data],np.float32)
features=paddle.to_variable(feature_norm(features_np))
labels=paddle.to_variable(labels_np)
#前向计算
y_pred=model(features)
cost=paddle.nn.functional.square_error_cost(y_pred,label=labels)
avg_cost=paddle.mean(cost)
train_cost = [avg_cost.numpy()]
#反向传播
avg_cost.backward()
#最小化loss,更新参数
opts=optimizer.minimize(avg_cost)
# 清除梯度
model.clear_gradients()
if batch_id%30==0 and epoch_id%30==0:
print("Pass:%d,Cost:%0.5f"%(epoch_id,train_cost[0][0]))
iter=iter+BATCH_SIZE
iters.append(iter)
train_costs.append(train_cost[0][0])
paddle.disable_static()
model = Regressor()
train(model)
.. parsed-literal::
start training ...
Pass:0,Cost:531.75244
Pass:30,Cost:61.10927
Pass:60,Cost:22.68571
Pass:90,Cost:34.80560
Pass:120,Cost:78.28358
Pass:150,Cost:124.95644
Pass:180,Cost:91.88014
Pass:210,Cost:15.23689
Pass:240,Cost:34.86035
Pass:270,Cost:54.76824
Pass:300,Cost:65.88247
Pass:330,Cost:41.25426
Pass:360,Cost:64.10200
Pass:390,Cost:77.11707
Pass:420,Cost:20.80456
Pass:450,Cost:29.80167
Pass:480,Cost:41.59278
.. code:: ipython3
matplotlib.use('TkAgg')
%matplotlib inline
draw_train_process(iters,train_costs)
.. image:: linear_regression_files/linear_regression_23_0.png
可以从上图看出,随着训练轮次的增加,损失在呈降低趋势。但由于每次仅基于少量样本更新参数和计算损失,所以损失下降曲线会出现震荡。
模型预测
----------
.. code:: ipython3
#获取预测数据
INFER_BATCH_SIZE=100
infer_reader=paddle.batch(paddle.dataset.uci_housing.test(),batch_size=INFER_BATCH_SIZE)
infer_data = next(infer_reader())
infer_features_np = np.array([data[0] for data in infer_data]).astype("float32")
infer_labels_np= np.array([data[1] for data in infer_data]).astype("float32")
infer_features=paddle.to_variable(feature_norm(infer_features_np))
infer_labels=paddle.to_variable(infer_labels_np)
fetch_list=model(infer_features).numpy()
sum_cost=0
for i in range(INFER_BATCH_SIZE):
infer_result=fetch_list[i][0]
ground_truth=infer_labels.numpy()[i]
if i%10==0:
print("No.%d: infer result is %.2f,ground truth is %.2f" % (i, infer_result,ground_truth))
cost=np.power(infer_result-ground_truth,2)
sum_cost+=cost
print("平均误差为:",sum_cost/INFER_BATCH_SIZE)
.. parsed-literal::
No.0: infer result is 12.20,ground truth is 8.50
No.10: infer result is 5.65,ground truth is 7.00
No.20: infer result is 14.87,ground truth is 11.70
No.30: infer result is 16.60,ground truth is 11.70
No.40: infer result is 13.71,ground truth is 10.80
No.50: infer result is 16.11,ground truth is 14.90
No.60: infer result is 18.78,ground truth is 21.40
No.70: infer result is 15.53,ground truth is 13.80
No.80: infer result is 18.10,ground truth is 20.60
No.90: infer result is 21.39,ground truth is 24.50
平均误差为: [12.917107]
.. code:: ipython3
    def plot_pred_ground(pred, ground):
        plt.figure()
        plt.title("Prediction v.s. Ground", fontsize=24)
        plt.xlabel("ground price(unit:$1000)", fontsize=14)
        plt.ylabel("predict price", fontsize=14)
        plt.scatter(pred, ground, alpha=0.5)  # scatter:散点图,alpha:透明度
        plt.plot(ground, ground, c='red')
        plt.show()
.. code:: ipython3
plot_pred_ground(fetch_list, infer_labels_np)
.. image:: linear_regression_files/linear_regression_28_0.png
上图可以看出,我们训练出来的模型的预测结果与真实结果是较为接近的。
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# 模型保存及加载\n",
"本教程将基于Paddle高阶API对模型参数的保存和加载进行讲解。在日常训练模型过程中我们会遇到一些突发情况,导致训练过程主动或被动的中断,因此在模型没有完全训练好的情况下,我们需要高频的保存下模型参数,在发生意外时可以快速载入保存的参数继续训练。抑或是模型已经训练好了,我们需要使用训练好的参数进行预测或部署模型上线。面对上述情况,Paddle中提供了保存模型和提取模型的方法,支持从上一次保存状态开始训练,只要我们随时保存训练过程中的模型状态,就不用从初始状态重新训练。\n",
"下面将基于手写数字识别的模型讲解paddle如何保存及加载模型,并恢复训练,网络结构部分的讲解省略。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 环境\n",
"本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop版本。"
]
},
{
"cell_type": "code",
"execution_count": 9,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"0.0.0\n"
]
}
],
"source": [
"import paddle\n",
"import paddle.nn.functional as F\n",
"from paddle.nn import Layer\n",
"from paddle.vision.datasets import MNIST\n",
"from paddle.metric import Accuracy\n",
"from paddle.nn import Conv2d,MaxPool2d,Linear\n",
"from paddle.static import InputSpec\n",
"\n",
"print(paddle.__version__)\n",
"paddle.disable_static()"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 数据集\n",
"手写数字的MNIST数据集,包含60,000个用于训练的示例和10,000个用于测试的示例。这些数字已经过尺寸标准化并位于图像中心,图像是固定大小(28x28像素),其值为0到1。该数据集的官方地址为:http://yann.lecun.com/exdb/mnist/\n",
"本例中我们使用飞桨自带的mnist数据集。使用from paddle.vision.datasets import MNIST 引入即可。"
]
},
{
"cell_type": "code",
"execution_count": 4,
"metadata": {},
"outputs": [],
"source": [
"train_dataset = MNIST(mode='train')\n",
"test_dataset = MNIST(mode='test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 模型搭建"
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"class MyModel(Layer):\n",
" def __init__(self):\n",
" super(MyModel, self).__init__()\n",
" self.conv1 = paddle.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)\n",
" self.max_pool1 = MaxPool2d(kernel_size=2, stride=2)\n",
" self.conv2 = Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1)\n",
" self.max_pool2 = MaxPool2d(kernel_size=2, stride=2)\n",
" self.linear1 = Linear(in_features=16*5*5, out_features=120)\n",
" self.linear2 = Linear(in_features=120, out_features=84)\n",
" self.linear3 = Linear(in_features=84, out_features=10)\n",
"\n",
" def forward(self, x):\n",
" x = self.conv1(x)\n",
" x = F.relu(x)\n",
" x = self.max_pool1(x)\n",
" x = F.relu(x)\n",
" x = self.conv2(x)\n",
" x = self.max_pool2(x)\n",
" x = paddle.flatten(x, start_axis=1, stop_axis=-1)\n",
" x = self.linear1(x)\n",
" x = F.relu(x)\n",
" x = self.linear2(x)\n",
" x = F.relu(x)\n",
" x = self.linear3(x)\n",
" x = F.softmax(x)\n",
" return x"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 模型训练\n",
"通过`Model` 构建实例,快速完成模型训练"
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/1\n",
"step 100/938 - loss: 1.6177 - acc_top1: 0.6119 - acc_top2: 0.6813 - 15ms/step\n",
"step 200/938 - loss: 1.7720 - acc_top1: 0.7230 - acc_top2: 0.7788 - 15ms/step\n",
"step 300/938 - loss: 1.6114 - acc_top1: 0.7666 - acc_top2: 0.8164 - 15ms/step\n",
"step 400/938 - loss: 1.6537 - acc_top1: 0.7890 - acc_top2: 0.8350 - 15ms/step\n",
"step 500/938 - loss: 1.5229 - acc_top1: 0.8170 - acc_top2: 0.8619 - 15ms/step\n",
"step 600/938 - loss: 1.5269 - acc_top1: 0.8391 - acc_top2: 0.8821 - 15ms/step\n",
"step 700/938 - loss: 1.4821 - acc_top1: 0.8561 - acc_top2: 0.8970 - 15ms/step\n",
"step 800/938 - loss: 1.4860 - acc_top1: 0.8689 - acc_top2: 0.9081 - 15ms/step\n",
"step 900/938 - loss: 1.5032 - acc_top1: 0.8799 - acc_top2: 0.9174 - 15ms/step\n",
"step 938/938 - loss: 1.4617 - acc_top1: 0.8835 - acc_top2: 0.9203 - 15ms/step\n",
"save checkpoint at /Users/dingjiawei/online_repo/book/paddle2.0_docs/save_model/mnist_checkpoint/0\n",
"Eval begin...\n",
"step 100/157 - loss: 1.4765 - acc_top1: 0.9636 - acc_top2: 0.9891 - 6ms/step\n",
"step 157/157 - loss: 1.4612 - acc_top1: 0.9705 - acc_top2: 0.9910 - 6ms/step\n",
"Eval samples: 10000\n",
"save checkpoint at /Users/dingjiawei/online_repo/book/paddle2.0_docs/save_model/mnist_checkpoint/final\n"
]
}
],
"source": [
"inputs = InputSpec([None, 784], 'float32', 'x')\n",
"labels = InputSpec([None, 10], 'float32', 'x')\n",
"model = paddle.Model(MyModel(), inputs, labels)\n",
"\n",
"optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n",
"\n",
"model.prepare(\n",
" optim,\n",
" paddle.nn.loss.CrossEntropyLoss(),\n",
" Accuracy(topk=(1, 2))\n",
" )\n",
"model.fit(train_dataset,\n",
" test_dataset,\n",
" epochs=1,\n",
" log_freq=100,\n",
" batch_size=64,\n",
" save_dir='mnist_checkpoint')\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 保存模型参数\n",
"\n",
"目前Paddle框架有三种保存模型参数的体系,分别是:\n",
"#### paddle 高阶API-模型参数保存\n",
" * paddle.Model.fit\n",
" * paddle.Model.save\n",
"#### paddle 基础框架-动态图-模型参数保存 \n",
" * paddle.save\n",
"#### paddle 基础框架-静态图-模型参数保存 \n",
" * paddle.io.save\n",
" * paddle.io.save_inference_model\n",
"\n",
"下面将基于高阶API对模型保存与加载的方法进行讲解。"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"\n",
"#### 方法一:\n",
"* paddle.Model.fit(train_data, epochs, batch_size, save_dir, log_freq) <br><br>\n",
"在使用model.fit函数进行网络循环训练时,在save_dir参数中指定保存模型的路径,save_freq指定写入频率,即可同时实现模型的训练和保存。mode.fit()只能保存模型参数,不能保存优化器参数,每个epoch结束只会生成一个.pdparams文件。可以边训练边保存,每次epoch结束会实时生成一个.pdparams文件。 \n",
"\n",
"#### 方法二:\n",
"* paddle.Model.save(self, path, training=True) <br><br>\n",
"model.save(path)方法可以保存模型结构、网络参数和优化器参数,参数training=true的使用场景是在训练过程中,此时会保存网络参数和优化器参数。每个epoch生成两种文件 0.pdparams,0.pdopt,分别存储了模型参数和优化器参数,但是只会在整个模型训练完成后才会生成包含所有epoch参数的文件,path的格式为'dirname/file_prefix' 或 'file_prefix',其中dirname指定路径名称,file_prefix 指定参数文件的名称。当training=false的时候,代表已经训练结束,此时存储的是预测模型结构和网络参数。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 方法一:训练过程中实时保存每个epoch的模型参数\n",
"model.fit(train_dataset,\n",
" test_dataset,\n",
" epochs=2,\n",
" batch_size=64,\n",
" save_dir='mnist_checkpoint'\n",
" )"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 方法二:model.save()保存模型和优化器参数信息\n",
"model.save('mnist_checkpoint/test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 加载模型参数\n",
"\n",
"当恢复训练状态时,需要加载模型数据,此时我们可以使用加载函数从存储模型状态和优化器状态的文件中载入模型参数和优化器参数,如果不需要恢复优化器,则不必使用优化器状态文件。\n",
"#### 高阶API-模型参数加载\n",
" * paddle.Model.load\n",
"#### paddle 基础框架-动态图-模型参数加载\n",
" * paddle.load\n",
"#### paddle 基础框架-静态图-模型参数加载\n",
" * paddle.io.load \n",
" * paddle.io.load_inference_model"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"下面将对高阶API的模型参数加载方法进行讲解\n",
"* model.load(self, path, skip_mismatch=False, reset_optimizer=False)<br><br>\n",
"model.load能够同时加载模型和优化器参数。通过reset_optimizer参数来指定是否需要恢复优化器参数,若reset_optimizer参数为True,则重新初始化优化器参数,若reset_optimizer参数为False,则从路径中恢复优化器参数。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# 高阶API加载模型\n",
"model.load('mnist_checkpoint/test')"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 恢复训练\n",
"\n",
"理想的恢复训练是模型状态回到训练中断的时刻,恢复训练之后的梯度更新走向是和恢复训练前的梯度走向完全相同的。基于此,我们可以通过恢复训练后的损失变化,判断上述方法是否能准确的恢复训练。即从epoch 0结束时保存的模型参数和优化器状态恢复训练,校验其后训练的损失变化(epoch 1)是否和不中断时的训练完全一致。\n",
"\n",
"说明:\n",
"\n",
"恢复训练有如下两个要点:\n",
"\n",
"* 保存模型时同时保存模型参数和优化器参数\n",
"\n",
"* 恢复参数时同时恢复模型参数和优化器参数。"
]
},
{
"cell_type": "code",
"execution_count": 12,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 1/2\n",
"step 100/938 - loss: 1.4635 - acc_top1: 0.9650 - acc_top2: 0.9898 - 15ms/step\n",
"step 200/938 - loss: 1.5459 - acc_top1: 0.9659 - acc_top2: 0.9897 - 15ms/step\n",
"step 300/938 - loss: 1.5109 - acc_top1: 0.9658 - acc_top2: 0.9893 - 15ms/step\n",
"step 400/938 - loss: 1.4797 - acc_top1: 0.9664 - acc_top2: 0.9899 - 15ms/step\n",
"step 500/938 - loss: 1.4786 - acc_top1: 0.9673 - acc_top2: 0.9902 - 15ms/step\n",
"step 600/938 - loss: 1.5082 - acc_top1: 0.9679 - acc_top2: 0.9906 - 15ms/step\n",
"step 700/938 - loss: 1.4768 - acc_top1: 0.9687 - acc_top2: 0.9909 - 15ms/step\n",
"step 800/938 - loss: 1.4638 - acc_top1: 0.9696 - acc_top2: 0.9913 - 15ms/step\n",
"step 900/938 - loss: 1.5058 - acc_top1: 0.9704 - acc_top2: 0.9916 - 15ms/step\n",
"step 938/938 - loss: 1.4702 - acc_top1: 0.9708 - acc_top2: 0.9917 - 15ms/step\n",
"Eval begin...\n",
"step 100/157 - loss: 1.4613 - acc_top1: 0.9755 - acc_top2: 0.9944 - 5ms/step\n",
"step 157/157 - loss: 1.4612 - acc_top1: 0.9805 - acc_top2: 0.9956 - 5ms/step\n",
"Eval samples: 10000\n",
"Epoch 2/2\n",
"step 100/938 - loss: 1.4832 - acc_top1: 0.9789 - acc_top2: 0.9927 - 15ms/step\n",
"step 200/938 - loss: 1.4618 - acc_top1: 0.9779 - acc_top2: 0.9932 - 14ms/step\n",
"step 300/938 - loss: 1.4613 - acc_top1: 0.9779 - acc_top2: 0.9929 - 15ms/step\n",
"step 400/938 - loss: 1.4765 - acc_top1: 0.9772 - acc_top2: 0.9932 - 15ms/step\n",
"step 500/938 - loss: 1.4932 - acc_top1: 0.9775 - acc_top2: 0.9934 - 15ms/step\n",
"step 600/938 - loss: 1.4773 - acc_top1: 0.9773 - acc_top2: 0.9936 - 15ms/step\n",
"step 700/938 - loss: 1.4612 - acc_top1: 0.9783 - acc_top2: 0.9939 - 15ms/step\n",
"step 800/938 - loss: 1.4653 - acc_top1: 0.9779 - acc_top2: 0.9939 - 15ms/step\n",
"step 900/938 - loss: 1.4639 - acc_top1: 0.9780 - acc_top2: 0.9939 - 15ms/step\n",
"step 938/938 - loss: 1.4678 - acc_top1: 0.9779 - acc_top2: 0.9937 - 15ms/step\n",
"Eval begin...\n",
"step 100/157 - loss: 1.4612 - acc_top1: 0.9733 - acc_top2: 0.9945 - 6ms/step\n",
"step 157/157 - loss: 1.4612 - acc_top1: 0.9778 - acc_top2: 0.9952 - 6ms/step\n",
"Eval samples: 10000\n"
]
}
],
"source": [
"import paddle\n",
"from paddle.vision.datasets import MNIST\n",
"from paddle.metric import Accuracy\n",
"from paddle.static import InputSpec\n",
"#\n",
"#\n",
"train_dataset = MNIST(mode='train')\n",
"test_dataset = MNIST(mode='test')\n",
"\n",
"paddle.disable_static()\n",
"\n",
"inputs = InputSpec([None, 784], 'float32', 'x')\n",
"labels = InputSpec([None, 10], 'float32', 'x')\n",
"model = paddle.Model(MyModel(), inputs, labels)\n",
"optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())\n",
"model.load(\"./mnist_checkpoint/final\")\n",
"model.prepare( \n",
" optim,\n",
" paddle.nn.loss.CrossEntropyLoss(),\n",
" Accuracy(topk=(1, 2))\n",
" )\n",
"model.fit(train_data=train_dataset,\n",
" eval_data=test_dataset,\n",
" batch_size=64,\n",
" log_freq=100,\n",
" epochs=2\n",
" )"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"## 总结\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"以上就是用Mnist手写数字识别的例子对保存模型、加载模型、恢复训练进行讲解,Paddle提供了很多保存和加载的API方法,您可以根据自己的需求进行选择。"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": []
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.7.8"
}
},
"nbformat": 4,
"nbformat_minor": 4
}
模型保存及加载
==============
本教程将基于Paddle高阶API对模型参数的保存和加载进行讲解。在日常训练模型过程中我们会遇到一些突发情况,导致训练过程主动或被动的中断,因此在模型没有完全训练好的情况下,我们需要高频的保存下模型参数,在发生意外时可以快速载入保存的参数继续训练。抑或是模型已经训练好了,我们需要使用训练好的参数进行预测或部署模型上线。面对上述情况,Paddle中提供了保存模型和提取模型的方法,支持从上一次保存状态开始训练,只要我们随时保存训练过程中的模型状态,就不用从初始状态重新训练。
下面将基于手写数字识别的模型讲解paddle如何保存及加载模型,并恢复训练,网络结构部分的讲解省略。
环境
----
本教程基于paddle-develop编写,如果您的环境不是本版本,请先安装paddle-develop版本。
.. code:: ipython3
import paddle
import paddle.nn.functional as F
from paddle.nn import Layer
from paddle.vision.datasets import MNIST
from paddle.metric import Accuracy
from paddle.nn import Conv2d,MaxPool2d,Linear
from paddle.static import InputSpec
print(paddle.__version__)
paddle.disable_static()
.. parsed-literal::
0.0.0
数据集
------
手写数字的MNIST数据集,包含60,000个用于训练的示例和10,000个用于测试的示例。这些数字已经过尺寸标准化并位于图像中心,图像是固定大小(28x28像素),其值为01。该数据集的官方地址为:http://yann.lecun.com/exdb/mnist/
本例中我们使用飞桨自带的mnist数据集。使用from paddle.vision.datasets
import MNIST 引入即可。
.. code:: ipython3
train_dataset = MNIST(mode='train')
test_dataset = MNIST(mode='test')
模型搭建
--------
.. code:: ipython3
class MyModel(Layer):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = paddle.nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5, stride=1, padding=2)
self.max_pool1 = MaxPool2d(kernel_size=2, stride=2)
self.conv2 = Conv2d(in_channels=6, out_channels=16, kernel_size=5, stride=1)
self.max_pool2 = MaxPool2d(kernel_size=2, stride=2)
self.linear1 = Linear(in_features=16*5*5, out_features=120)
self.linear2 = Linear(in_features=120, out_features=84)
self.linear3 = Linear(in_features=84, out_features=10)
def forward(self, x):
x = self.conv1(x)
x = F.relu(x)
x = self.max_pool1(x)
x = F.relu(x)
x = self.conv2(x)
x = self.max_pool2(x)
x = paddle.flatten(x, start_axis=1, stop_axis=-1)
x = self.linear1(x)
x = F.relu(x)
x = self.linear2(x)
x = F.relu(x)
x = self.linear3(x)
x = F.softmax(x)
return x
模型训练
--------
通过\ ``Model`` 构建实例,快速完成模型训练
.. code:: ipython3
inputs = InputSpec([None, 784], 'float32', 'x')
labels = InputSpec([None, 10], 'float32', 'x')
model = paddle.Model(MyModel(), inputs, labels)
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
model.prepare(
optim,
paddle.nn.loss.CrossEntropyLoss(),
Accuracy(topk=(1, 2))
)
model.fit(train_dataset,
test_dataset,
epochs=1,
log_freq=100,
batch_size=64,
save_dir='mnist_checkpoint')
.. parsed-literal::
Epoch 1/1
step 100/938 - loss: 1.6177 - acc_top1: 0.6119 - acc_top2: 0.6813 - 15ms/step
step 200/938 - loss: 1.7720 - acc_top1: 0.7230 - acc_top2: 0.7788 - 15ms/step
step 300/938 - loss: 1.6114 - acc_top1: 0.7666 - acc_top2: 0.8164 - 15ms/step
step 400/938 - loss: 1.6537 - acc_top1: 0.7890 - acc_top2: 0.8350 - 15ms/step
step 500/938 - loss: 1.5229 - acc_top1: 0.8170 - acc_top2: 0.8619 - 15ms/step
step 600/938 - loss: 1.5269 - acc_top1: 0.8391 - acc_top2: 0.8821 - 15ms/step
step 700/938 - loss: 1.4821 - acc_top1: 0.8561 - acc_top2: 0.8970 - 15ms/step
step 800/938 - loss: 1.4860 - acc_top1: 0.8689 - acc_top2: 0.9081 - 15ms/step
step 900/938 - loss: 1.5032 - acc_top1: 0.8799 - acc_top2: 0.9174 - 15ms/step
step 938/938 - loss: 1.4617 - acc_top1: 0.8835 - acc_top2: 0.9203 - 15ms/step
save checkpoint at /Users/dingjiawei/online_repo/book/paddle2.0_docs/save_model/mnist_checkpoint/0
Eval begin...
step 100/157 - loss: 1.4765 - acc_top1: 0.9636 - acc_top2: 0.9891 - 6ms/step
step 157/157 - loss: 1.4612 - acc_top1: 0.9705 - acc_top2: 0.9910 - 6ms/step
Eval samples: 10000
save checkpoint at /Users/dingjiawei/online_repo/book/paddle2.0_docs/save_model/mnist_checkpoint/final
保存模型参数
------------
目前Paddle框架有三种保存模型参数的体系,分别是:

- paddle 高阶API-模型参数保存

  - paddle.Model.fit
  - paddle.Model.save

- paddle 基础框架-动态图-模型参数保存

  - paddle.save

- paddle 基础框架-静态图-模型参数保存

  - paddle.io.save
  - paddle.io.save_inference_model

下面将基于高阶API对模型保存与加载的方法进行讲解。
方法一:
^^^^^^^^
- paddle.Model.fit(train_data, epochs, batch_size, save_dir, log_freq)
在使用model.fit函数进行网络循环训练时,在save_dir参数中指定保存模型的路径,用save_freq指定保存频率,即可同时实现模型的训练和保存。model.fit()只能保存模型参数,不能保存优化器参数;可以边训练边保存,每个epoch结束会实时生成一个.pdparams文件。
方法二:
^^^^^^^^
- paddle.Model.save(self, path, training=True)
model.save(path)方法可以保存模型结构、网络参数和优化器参数。参数training=True的使用场景是在训练过程中,此时会保存网络参数和优化器参数,每个epoch生成 0.pdparams 和 0.pdopt 两种文件,分别存储了模型参数和优化器参数,但只有在整个模型训练完成后才会生成包含所有epoch参数的文件;path的格式为 'dirname/file_prefix' 或 'file_prefix',其中dirname指定路径名称,file_prefix指定参数文件的名称。当training=False的时候,代表已经训练结束,此时存储的是预测模型结构和网络参数。
.. code:: ipython3
# 方法一:训练过程中实时保存每个epoch的模型参数
model.fit(train_dataset,
test_dataset,
epochs=2,
batch_size=64,
save_dir='mnist_checkpoint'
)
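
如果希望控制保存频率,可以在fit中再传入上文提到的save_freq参数,下面给出一个示意写法(save_freq=1表示每个epoch保存一次,仅为示意):

.. code:: ipython3

    # 示意:通过 save_freq 指定每隔多少个 epoch 保存一次参数
    model.fit(train_dataset,
              test_dataset,
              epochs=2,
              batch_size=64,
              save_dir='mnist_checkpoint',
              save_freq=1
              )
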
.. code:: ipython3
# 方法二:model.save()保存模型和优化器参数信息
model.save('mnist_checkpoint/test')
加载模型参数
------------
当恢复训练状态时,需要加载模型数据,此时我们可以使用加载函数从存储模型状态和优化器状态的文件中载入模型参数和优化器参数,如果不需要恢复优化器,则不必使用优化器状态文件。
- 高阶API-模型参数加载

  - paddle.Model.load

- paddle 基础框架-动态图-模型参数加载

  - paddle.load

- paddle 基础框架-静态图-模型参数加载

  - paddle.io.load
  - paddle.io.load_inference_model

下面将对高阶API的模型参数加载方法进行讲解:

- model.load(self, path, skip_mismatch=False, reset_optimizer=False)

model.load能够同时加载模型和优化器参数。通过reset_optimizer参数来指定是否需要恢复优化器参数,若reset_optimizer参数为True,则重新初始化优化器参数,若reset_optimizer参数为False,则从路径中恢复优化器参数。
.. code:: ipython3
# 高阶API加载模型
model.load('mnist_checkpoint/test')
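
如果希望显式控制优化器参数是否恢复,可以按上面给出的签名传入reset_optimizer参数,下面是一个示意写法:

.. code:: ipython3

    # 示意:reset_optimizer=False 表示同时从保存路径中恢复优化器参数
    model.load('mnist_checkpoint/test', reset_optimizer=False)
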
恢复训练
--------
理想的恢复训练是模型状态回到训练中断的时刻,恢复训练之后的梯度更新走向是和恢复训练前的梯度走向完全相同的。基于此,我们可以通过恢复训练后的损失变化,判断上述方法是否能准确的恢复训练。即从epoch
0结束时保存的模型参数和优化器状态恢复训练,校验其后训练的损失变化(epoch
1)是否和不中断时的训练完全一致。
说明:
恢复训练有如下两个要点:
- 保存模型时同时保存模型参数和优化器参数
- 恢复参数时同时恢复模型参数和优化器参数。
.. code:: ipython3
import paddle
from paddle.vision.datasets import MNIST
from paddle.metric import Accuracy
from paddle.static import InputSpec
#
#
train_dataset = MNIST(mode='train')
test_dataset = MNIST(mode='test')
paddle.disable_static()
inputs = InputSpec([None, 784], 'float32', 'x')
labels = InputSpec([None, 10], 'float32', 'x')
model = paddle.Model(MyModel(), inputs, labels)
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
model.load("./mnist_checkpoint/final")
model.prepare(
optim,
paddle.nn.loss.CrossEntropyLoss(),
Accuracy(topk=(1, 2))
)
model.fit(train_data=train_dataset,
eval_data=test_dataset,
batch_size=64,
log_freq=100,
epochs=2
)
.. parsed-literal::
Epoch 1/2
step 100/938 - loss: 1.4635 - acc_top1: 0.9650 - acc_top2: 0.9898 - 15ms/step
step 200/938 - loss: 1.5459 - acc_top1: 0.9659 - acc_top2: 0.9897 - 15ms/step
step 300/938 - loss: 1.5109 - acc_top1: 0.9658 - acc_top2: 0.9893 - 15ms/step
step 400/938 - loss: 1.4797 - acc_top1: 0.9664 - acc_top2: 0.9899 - 15ms/step
step 500/938 - loss: 1.4786 - acc_top1: 0.9673 - acc_top2: 0.9902 - 15ms/step
step 600/938 - loss: 1.5082 - acc_top1: 0.9679 - acc_top2: 0.9906 - 15ms/step
step 700/938 - loss: 1.4768 - acc_top1: 0.9687 - acc_top2: 0.9909 - 15ms/step
step 800/938 - loss: 1.4638 - acc_top1: 0.9696 - acc_top2: 0.9913 - 15ms/step
step 900/938 - loss: 1.5058 - acc_top1: 0.9704 - acc_top2: 0.9916 - 15ms/step
step 938/938 - loss: 1.4702 - acc_top1: 0.9708 - acc_top2: 0.9917 - 15ms/step
Eval begin...
step 100/157 - loss: 1.4613 - acc_top1: 0.9755 - acc_top2: 0.9944 - 5ms/step
step 157/157 - loss: 1.4612 - acc_top1: 0.9805 - acc_top2: 0.9956 - 5ms/step
Eval samples: 10000
Epoch 2/2
step 100/938 - loss: 1.4832 - acc_top1: 0.9789 - acc_top2: 0.9927 - 15ms/step
step 200/938 - loss: 1.4618 - acc_top1: 0.9779 - acc_top2: 0.9932 - 14ms/step
step 300/938 - loss: 1.4613 - acc_top1: 0.9779 - acc_top2: 0.9929 - 15ms/step
step 400/938 - loss: 1.4765 - acc_top1: 0.9772 - acc_top2: 0.9932 - 15ms/step
step 500/938 - loss: 1.4932 - acc_top1: 0.9775 - acc_top2: 0.9934 - 15ms/step
step 600/938 - loss: 1.4773 - acc_top1: 0.9773 - acc_top2: 0.9936 - 15ms/step
step 700/938 - loss: 1.4612 - acc_top1: 0.9783 - acc_top2: 0.9939 - 15ms/step
step 800/938 - loss: 1.4653 - acc_top1: 0.9779 - acc_top2: 0.9939 - 15ms/step
step 900/938 - loss: 1.4639 - acc_top1: 0.9780 - acc_top2: 0.9939 - 15ms/step
step 938/938 - loss: 1.4678 - acc_top1: 0.9779 - acc_top2: 0.9937 - 15ms/step
Eval begin...
step 100/157 - loss: 1.4612 - acc_top1: 0.9733 - acc_top2: 0.9945 - 6ms/step
step 157/157 - loss: 1.4612 - acc_top1: 0.9778 - acc_top2: 0.9952 - 6ms/step
Eval samples: 10000
总结
----
以上就是用Mnist手写数字识别的例子对保存模型、加载模型、恢复训练进行讲解,Paddle提供了很多保存和加载的API方法,您可以根据自己的需求进行选择。