Unverified commit 90ff1f7f authored by Huang Zhengjie, committed by GitHub

Merge pull request #11 from PaddlePaddle/main

Merge
# data and log
/examples/GaAN/dataset/
/examples/GaAN/log/
/examples/GaAN/__pycache__/
/examples/GaAN/params/
/DoorGod
# Virtualenv
/.venv/
/venv/
## Highlight: Efficiency - Support Scatter-Gather and LodTensor Message Passing
One of the most important benefits of graph neural networks compared to other models is the ability to use node-to-node connectivity information, but coding the communication between nodes is very cumbersome. PGL adopts a **message passing paradigm** similar to [DGL](https://github.com/dmlc/dgl) to make it easy to build custom graph neural networks. Users only need to write ```send``` and ```recv``` functions to implement a simple GCN. As shown in the following figure, in the first step the send function is defined on the edges of the graph, and the user can customize the send function ![](http://latex.codecogs.com/gif.latex?\\phi^e) to send messages from source to target nodes. In the second step, the recv function ![](http://latex.codecogs.com/gif.latex?\\phi^v) is responsible for aggregating ![](http://latex.codecogs.com/gif.latex?\\oplus) messages together from different sources.
<img src="./docs/source/_static/message_passing_paradigm.png" alt="The basic idea of message passing paradigm" width="800">
As shown on the left of the following figure, to support general user-defined aggregation functions, DGL uses degree bucketing: nodes with the same degree are combined into a batch, and the aggregation function ![](http://latex.codecogs.com/gif.latex?\\oplus) is applied to each batch serially. For PGL's user-defined aggregation functions, we instead organize the messages as a [LodTensor](http://www.paddlepaddle.org/documentation/docs/en/1.4/user_guides/howto/basic_concept/lod_tensor_en.html) in [PaddlePaddle](https://github.com/PaddlePaddle/Paddle), treating the messages as variable-length sequences, and we **utilize the features of LodTensor in Paddle to obtain fast parallel aggregation**.
<img src="./docs/source/_static/parallel_degree_bucketing.png" alt="The parallel degree bucketing of PGL" width="800">
From the quick-start example, we build a toy graph whose nodes carry random d-dimensional features and whose edges each carry a scalar weight:
```python
# Each node can be represented by a d-dimensional feature vector; here, for
# simplicity, the feature vectors are randomly generated.
d = 16
feature = np.random.randn(num_node, d).astype("float32")
# each edge has its own weight
edge_feature = np.random.randn(len(edge_list), 1).astype("float32")
# create a graph
g = graph.Graph(num_nodes=num_node,
                edges=edge_list,
                node_feat={'feature': feature},
                edge_feat={'edge_feature': edge_feature})
```
In this tutorial, we use a simple Graph Convolutional Network (GCN), developed by Kipf and Welling, for node classification.
In PGL, we can easily implement a GCN layer as follows:
```python
# define GCN layer function
def gcn_layer(gw, nfeat, efeat, hidden_size, name, activation):
    # gw is a GraphWrapper; nfeat holds the node features; efeat holds the edge weights

    # define message function
    def send_func(src_feat, dst_feat, edge_feat):
        # the message is the source node feature scaled by the edge weight
        return src_feat['h'] * edge_feat['e']

    # define reduce function
    def recv_func(feat):
        # sum the messages arriving at each destination node
        return fluid.layers.sequence_pool(feat, pool_type='sum')

    # trigger message passing
    msg = gw.send(send_func, nfeat_list=[('h', nfeat)], efeat_list=[('e', efeat)])
    # the recv function receives the messages and triggers the reduce function
    output = gw.recv(msg, recv_func)
    output = fluid.layers.fc(output,
                             size=hidden_size,
                             bias_attr=False,
                             act=activation,
                             name=name)
    return output
```
After defining the GCN layer, we can construct a deeper GCN model with two GCN layers.
```python
output = gcn_layer(gw, gw.node_feat['feature'], gw.edge_feat['edge_feature'],
                   hidden_size=8, name='gcn_layer_1', activation='relu')
output = gcn_layer(gw, output, gw.edge_feat['edge_feature'],
                   hidden_size=1, name='gcn_layer_2', activation=None)
```
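For context, the `gw` object used above is a `GraphWrapper` built from the graph `g` defined earlier. A minimal sketch, mirroring how the training script later in this dump constructs it:

```python
import paddle.fluid as fluid
import pgl

place = fluid.CPUPlace()  # or fluid.CUDAPlace(0)
gw = pgl.graph_wrapper.GraphWrapper(
    name='graph',
    place=place,
    node_feat=g.node_feat_info(),  # feature specs of the graph built above
    edge_feat=g.edge_feat_info())
```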
## Step 3: data preprocessing
......
# GaAN: Gated Attention Networks for Learning on Large and Spatiotemporal Graphs
[GaAN](https://arxiv.org/abs/1803.07294) is a powerful neural network designed for machine learning on graphs. It introduces a gated attention mechanism. Based on PGL, we reproduce the GaAN algorithm and train the model on [ogbn-proteins](https://ogb.stanford.edu/docs/nodeprop/#ogbn-proteins).
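In brief (our notation, following the paper and the `gaan` layer implemented later in this dump): each attention head aggregates projected neighbor values with softmax weights, and a per-head scalar gate, computed from the center node together with max-pooled and mean-pooled neighbor features, scales each head before the output projection:

```latex
\mathbf{y}_i = \mathrm{FC}_o\Big(\mathbf{x}_i \,\Vert\, \Vert_{k=1}^{K}\,
    g_i^{(k)} \sum_{j \in \mathcal{N}(i)} \alpha_{ij}^{(k)}\, \mathrm{FC}_v^{(k)}(\mathbf{z}_j)\Big),
\qquad
\mathbf{g}_i = \sigma\Big(\mathrm{FC}_g\big[\mathbf{x}_i \,\Vert\,
    \max_{j \in \mathcal{N}(i)} \mathrm{FC}_m(\mathbf{z}_j) \,\Vert\,
    \mathrm{mean}_{j \in \mathcal{N}(i)}\, \mathbf{z}_j\big]\Big)
```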
## Datasets
The ogbn-proteins dataset will be downloaded into the directory ./dataset automatically.
## Dependencies
- [paddlepaddle >= 1.6](https://github.com/paddlepaddle/paddle)
- [pgl 1.1](https://github.com/PaddlePaddle/PGL)
- [ogb 1.1.1](https://github.com/snap-stanford/ogb)
## How to run
```bash
python train.py --lr 1e-2 --rc 0 --batch_size 1024 --epochs 100
```
or
```bash
source main.sh
```
### Hyperparameters
- use_gpu: whether to use the GPU
- mini_data: use a small dataset to test the code
- epochs: number of training epochs
- lr: learning rate
- rc: regularization coefficient
- log_path: the path of the log
- batch_size: the batch size
- heads: the number of attention heads
- hidden_size_a: the size of the query and key vectors
- hidden_size_v: the size of the value vectors
- hidden_size_m: the size of the projection space for computing the gates
- hidden_size_o: the output size of the GaAN layer
## Performance
We train our models for 100 epochs and report the **rocauc** on the test dataset.
|dataset|mean|std|#experiments|
|-|-|-|-|
|ogbn-proteins|0.7803|0.0073|10|
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package implements common layers to help building
graph neural networks.
"""
import paddle.fluid as fluid
from pgl import graph_wrapper
from pgl.utils import paddle_helper
__all__ = ['gcn', 'gat', 'gin', 'gaan']
def gcn(gw, feature, hidden_size, activation, name, norm=None):
"""Implementation of graph convolutional neural networks (GCN)
This is an implementation of the paper SEMI-SUPERVISED CLASSIFICATION
WITH GRAPH CONVOLUTIONAL NETWORKS (https://arxiv.org/pdf/1609.02907.pdf).
Args:
gw: Graph wrapper object (:code:`StaticGraphWrapper` or :code:`GraphWrapper`)
feature: A tensor with shape (num_nodes, feature_size).
hidden_size: The hidden size for gcn.
activation: The activation for the output.
name: Gcn layer names.
norm: If :code:`norm` is not None, then the feature will be normalized. Norm must
be tensor with shape (num_nodes,) and dtype float32.
Return:
A tensor with shape (num_nodes, hidden_size)
"""
def send_src_copy(src_feat, dst_feat, edge_feat):
return src_feat["h"]
    size = feature.shape[-1]
    # if the input is wider than the output, project down before sending so
    # that the smaller vectors are what travel along the edges
    if size > hidden_size:
        feature = fluid.layers.fc(feature,
                                  size=hidden_size,
                                  bias_attr=False,
                                  param_attr=fluid.ParamAttr(name=name))
    if norm is not None:
        feature = feature * norm
    msg = gw.send(send_src_copy, nfeat_list=[("h", feature)])
    if size > hidden_size:
        # the projection already happened before send; just aggregate
        output = gw.recv(msg, "sum")
    else:
        # aggregate first, then project the aggregated result
        output = gw.recv(msg, "sum")
        output = fluid.layers.fc(output,
                                 size=hidden_size,
                                 bias_attr=False,
                                 param_attr=fluid.ParamAttr(name=name))
    if norm is not None:
        output = output * norm
bias = fluid.layers.create_parameter(
shape=[hidden_size],
dtype='float32',
is_bias=True,
name=name + '_bias')
output = fluid.layers.elementwise_add(output, bias, act=activation)
return output
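# A minimal usage sketch (hypothetical graph `g` and sizes, not part of this
# module): wrap the graph, then run one gcn layer on the wrapper's features.
#
#     import pgl
#     gw = pgl.graph_wrapper.GraphWrapper(
#         name="graph", place=fluid.CPUPlace(),
#         node_feat=g.node_feat_info())
#     h = gcn(gw, gw.node_feat["feature"], hidden_size=16,
#             activation="relu", name="gcn_0")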
def gat(gw,
feature,
hidden_size,
activation,
name,
num_heads=8,
feat_drop=0.6,
attn_drop=0.6,
is_test=False):
"""Implementation of graph attention networks (GAT)
This is an implementation of the paper GRAPH ATTENTION NETWORKS
(https://arxiv.org/abs/1710.10903).
Args:
gw: Graph wrapper object (:code:`StaticGraphWrapper` or :code:`GraphWrapper`)
feature: A tensor with shape (num_nodes, feature_size).
hidden_size: The hidden size for gat.
activation: The activation for the output.
name: Gat layer names.
num_heads: The head number in gat.
feat_drop: Dropout rate for feature.
attn_drop: Dropout rate for attention.
        is_test: Whether in test phase.
Return:
A tensor with shape (num_nodes, hidden_size * num_heads)
"""
def send_attention(src_feat, dst_feat, edge_feat):
output = src_feat["left_a"] + dst_feat["right_a"]
output = fluid.layers.leaky_relu(
output, alpha=0.2) # (num_edges, num_heads)
return {"alpha": output, "h": src_feat["h"]}
def reduce_attention(msg):
alpha = msg["alpha"] # lod-tensor (batch_size, seq_len, num_heads)
h = msg["h"]
alpha = paddle_helper.sequence_softmax(alpha)
old_h = h
h = fluid.layers.reshape(h, [-1, num_heads, hidden_size])
alpha = fluid.layers.reshape(alpha, [-1, num_heads, 1])
if attn_drop > 1e-15:
alpha = fluid.layers.dropout(
alpha,
dropout_prob=attn_drop,
is_test=is_test,
dropout_implementation="upscale_in_train")
h = h * alpha
h = fluid.layers.reshape(h, [-1, num_heads * hidden_size])
h = fluid.layers.lod_reset(h, old_h)
return fluid.layers.sequence_pool(h, "sum")
if feat_drop > 1e-15:
feature = fluid.layers.dropout(
feature,
dropout_prob=feat_drop,
is_test=is_test,
dropout_implementation='upscale_in_train')
ft = fluid.layers.fc(feature,
hidden_size * num_heads,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_weight'))
left_a = fluid.layers.create_parameter(
shape=[num_heads, hidden_size],
dtype='float32',
name=name + '_gat_l_A')
right_a = fluid.layers.create_parameter(
shape=[num_heads, hidden_size],
dtype='float32',
name=name + '_gat_r_A')
reshape_ft = fluid.layers.reshape(ft, [-1, num_heads, hidden_size])
left_a_value = fluid.layers.reduce_sum(reshape_ft * left_a, -1)
right_a_value = fluid.layers.reduce_sum(reshape_ft * right_a, -1)
msg = gw.send(
send_attention,
nfeat_list=[("h", ft), ("left_a", left_a_value),
("right_a", right_a_value)])
output = gw.recv(msg, reduce_attention)
bias = fluid.layers.create_parameter(
shape=[hidden_size * num_heads],
dtype='float32',
is_bias=True,
name=name + '_bias')
bias.stop_gradient = True
output = fluid.layers.elementwise_add(output, bias, act=activation)
return output
def gin(gw,
feature,
hidden_size,
activation,
name,
init_eps=0.0,
train_eps=False):
"""Implementation of Graph Isomorphism Network (GIN) layer.
This is an implementation of the paper How Powerful are Graph Neural Networks?
(https://arxiv.org/pdf/1810.00826.pdf).
In their implementation, all MLPs have 2 layers. Batch normalization is applied
on every hidden layer.
Args:
gw: Graph wrapper object (:code:`StaticGraphWrapper` or :code:`GraphWrapper`)
feature: A tensor with shape (num_nodes, feature_size).
name: GIN layer names.
hidden_size: The hidden size for gin.
activation: The activation for the output.
init_eps: float, optional
Initial :math:`\epsilon` value, default is 0.
train_eps: bool, optional
if True, :math:`\epsilon` will be a learnable parameter.
Return:
A tensor with shape (num_nodes, hidden_size).
"""
def send_src_copy(src_feat, dst_feat, edge_feat):
return src_feat["h"]
epsilon = fluid.layers.create_parameter(
shape=[1, 1],
dtype="float32",
attr=fluid.ParamAttr(name="%s_eps" % name),
default_initializer=fluid.initializer.ConstantInitializer(
value=init_eps))
if not train_eps:
epsilon.stop_gradient = True
msg = gw.send(send_src_copy, nfeat_list=[("h", feature)])
output = gw.recv(msg, "sum") + feature * (epsilon + 1.0)
output = fluid.layers.fc(output,
size=hidden_size,
act=None,
param_attr=fluid.ParamAttr(name="%s_w_0" % name),
bias_attr=fluid.ParamAttr(name="%s_b_0" % name))
output = fluid.layers.layer_norm(
output,
begin_norm_axis=1,
param_attr=fluid.ParamAttr(
name="norm_scale_%s" % (name),
initializer=fluid.initializer.Constant(1.0)),
bias_attr=fluid.ParamAttr(
name="norm_bias_%s" % (name),
initializer=fluid.initializer.Constant(0.0)), )
if activation is not None:
output = getattr(fluid.layers, activation)(output)
output = fluid.layers.fc(output,
size=hidden_size,
act=activation,
param_attr=fluid.ParamAttr(name="%s_w_1" % name),
bias_attr=fluid.ParamAttr(name="%s_b_1" % name))
return output
def gaan(gw, feature, hidden_size_a, hidden_size_v, hidden_size_m, hidden_size_o, heads, name):
"""Implementation of GaAN"""
def send_func(src_feat, dst_feat, edge_feat):
# attention score of each edge
# E * (M * D1)
feat_query, feat_key = dst_feat['feat_query'], src_feat['feat_key']
# E * M * D1
feat_query = fluid.layers.reshape(feat_query, [-1, heads, hidden_size_a])
feat_key = fluid.layers.reshape(feat_key, [-1, heads, hidden_size_a])
# E * M
alpha = fluid.layers.reduce_sum(feat_key * feat_query, dim=-1)
return {'dst_node_feat': dst_feat['node_feat'],
'src_node_feat': src_feat['node_feat'],
'feat_value': src_feat['feat_value'],
'alpha': alpha,
'feat_gate': src_feat['feat_gate']}
def recv_func(message):
dst_feat = message['dst_node_feat']
src_feat = message['src_node_feat']
x = fluid.layers.sequence_pool(dst_feat, 'average')
z = fluid.layers.sequence_pool(src_feat, 'average')
feat_gate = message['feat_gate']
g_max = fluid.layers.sequence_pool(feat_gate, 'max')
g = fluid.layers.concat([x, g_max, z], axis=1)
g = fluid.layers.fc(g, heads, bias_attr=False, act="sigmoid")
# softmax
alpha = message['alpha']
alpha = paddle_helper.sequence_softmax(alpha) # E * M
feat_value = message['feat_value'] # E * (M * D2)
old = feat_value
feat_value = fluid.layers.reshape(feat_value, [-1, heads, hidden_size_v]) # E * M * D2
feat_value = fluid.layers.elementwise_mul(feat_value, alpha, axis=0)
feat_value = fluid.layers.reshape(feat_value, [-1, heads*hidden_size_v]) # E * (M * D2)
feat_value = fluid.layers.lod_reset(feat_value, old)
feat_value = fluid.layers.sequence_pool(feat_value, 'sum') # N * (M * D2)
feat_value = fluid.layers.reshape(feat_value, [-1, heads, hidden_size_v]) # N * M * D2
output = fluid.layers.elementwise_mul(feat_value, g, axis=0)
output = fluid.layers.reshape(output, [-1, heads * hidden_size_v]) # N * (M * D2)
output = fluid.layers.concat([x, output], axis=1)
return output
# N * (D1 * M)
feat_key = fluid.layers.fc(feature, hidden_size_a * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_key'))
# N * (D2 * M)
feat_value = fluid.layers.fc(feature, hidden_size_v * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_value'))
# N * (D1 * M)
feat_query = fluid.layers.fc(feature, hidden_size_a * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_query'))
# N * Dm
feat_gate = fluid.layers.fc(feature, hidden_size_m, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_gate'))
# send stage
message = gw.send(
send_func,
nfeat_list=[('node_feat', feature), ('feat_key', feat_key), ('feat_value', feat_value),
('feat_query', feat_query), ('feat_gate', feat_gate)],
efeat_list=None,
)
# recv stage
output = gw.recv(message, recv_func)
output = fluid.layers.fc(output, hidden_size_o, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_output'))
output = fluid.layers.leaky_relu(output, alpha=0.1)
output = fluid.layers.dropout(output, dropout_prob=0.1)
return output
python3 train.py --epochs 100 --lr 1e-2 --rc 0 --batch_size 1024 --gpu_id 0 --exp_id 0
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from paddle import fluid
from pgl.utils import paddle_helper
# from pgl.layers import gaan
from conv import gaan
class GaANModel(object):
def __init__(self, num_class, num_layers, hidden_size_a=24,
hidden_size_v=32, hidden_size_m=64, hidden_size_o=128,
heads=8, act='relu', name="GaAN"):
self.num_class = num_class
self.num_layers = num_layers
self.hidden_size_a = hidden_size_a
self.hidden_size_v = hidden_size_v
self.hidden_size_m = hidden_size_m
self.hidden_size_o = hidden_size_o
self.act = act
self.name = name
self.heads = heads
def forward(self, gw):
feature = gw.node_feat['node_feat']
for i in range(self.num_layers):
feature = gaan(gw, feature, self.hidden_size_a, self.hidden_size_v,
self.hidden_size_m, self.hidden_size_o, self.heads,
self.name+'_'+str(i))
pred = fluid.layers.fc(
feature, self.num_class, act=None, name=self.name + "_pred_output")
return pred
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import ssl
# allow the OGB dataset download to proceed even if HTTPS certificate
# verification fails in this environment
ssl._create_default_https_context = ssl._create_unverified_context
from ogb.nodeproppred import NodePropPredDataset, Evaluator
import pgl
import numpy as np
import os
import time
def get_graph_data(d_name="ogbn-proteins", mini_data=False):
"""
Param:
d_name: name of dataset
mini_data: if mini_data==True, only use a small dataset (for test)
"""
# import ogb data
dataset = NodePropPredDataset(name = d_name)
num_tasks = dataset.num_tasks # obtaining the number of prediction tasks in a dataset
split_idx = dataset.get_idx_split()
train_idx, valid_idx, test_idx = split_idx["train"], split_idx["valid"], split_idx["test"]
graph, label = dataset[0]
# reshape
graph["edge_index"] = graph["edge_index"].T
# mini dataset
if mini_data:
graph['num_nodes'] = 500
mask = (graph['edge_index'][:, 0] < 500)*(graph['edge_index'][:, 1] < 500)
graph["edge_index"] = graph["edge_index"][mask]
graph["edge_feat"] = graph["edge_feat"][mask]
label = label[:500]
train_idx = np.arange(0,400)
valid_idx = np.arange(400,450)
test_idx = np.arange(450,500)
# read/compute node feature
if mini_data:
node_feat_path = './dataset/ogbn_proteins_node_feat_small.npy'
else:
node_feat_path = './dataset/ogbn_proteins_node_feat.npy'
new_node_feat = None
if os.path.exists(node_feat_path):
print("Begin: read node feature".center(50, '='))
new_node_feat = np.load(node_feat_path)
print("End: read node feature".center(50, '='))
else:
print("Begin: compute node feature".center(50, '='))
start = time.perf_counter()
for i in range(graph['num_nodes']):
if i % 100 == 0:
dur = time.perf_counter() - start
print("{}/{}({}%), times: {:.2f}s".format(
i, graph['num_nodes'], i/graph['num_nodes']*100, dur
))
mask = (graph['edge_index'][:, 0] == i)
current_node_feat = np.mean(np.compress(mask, graph['edge_feat'], axis=0),
axis=0, keepdims=True)
if i == 0:
new_node_feat = [current_node_feat]
else:
new_node_feat.append(current_node_feat)
new_node_feat = np.concatenate(new_node_feat, axis=0)
print("End: compute node feature".center(50,'='))
print("Saving node feature in "+node_feat_path.center(50, '='))
np.save(node_feat_path, new_node_feat)
print("Saving finish".center(50,'='))
print(new_node_feat)
# create graph
g = pgl.graph.Graph(
num_nodes=graph["num_nodes"],
edges = graph["edge_index"],
node_feat = {'node_feat': new_node_feat},
edge_feat = None
)
print("Create graph")
print(g)
return g, label, train_idx, valid_idx, test_idx, Evaluator(d_name)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import pickle as pkl
import paddle
import paddle.fluid as fluid
import pgl
import time
from pgl.utils import mp_reader
from pgl.utils.logger import log
import copy
def node_batch_iter(nodes, node_label, batch_size):
"""node_batch_iter
"""
perm = np.arange(len(nodes))
np.random.shuffle(perm)
start = 0
while start < len(nodes):
index = perm[start:start + batch_size]
start += batch_size
yield nodes[index], node_label[index]
def traverse(item):
"""traverse
"""
if isinstance(item, list) or isinstance(item, np.ndarray):
for i in iter(item):
for j in traverse(i):
yield j
else:
yield item
def flat_node_and_edge(nodes):
"""flat_node_and_edge
"""
nodes = list(set(traverse(nodes)))
return nodes
def worker(batch_info, graph, graph_wrapper, samples):
"""Worker
"""
def work():
"""work
"""
_graph_wrapper = copy.copy(graph_wrapper)
_graph_wrapper.node_feat_tensor_dict = {}
for batch_train_samples, batch_train_labels in batch_info:
start_nodes = batch_train_samples
nodes = start_nodes
edges = []
for max_deg in samples:
pred_nodes = graph.sample_predecessor(
start_nodes, max_degree=max_deg)
for dst_node, src_nodes in zip(start_nodes, pred_nodes):
for src_node in src_nodes:
edges.append((src_node, dst_node))
last_nodes = nodes
nodes = [nodes, pred_nodes]
nodes = flat_node_and_edge(nodes)
# Find new nodes
start_nodes = list(set(nodes) - set(last_nodes))
if len(start_nodes) == 0:
break
subgraph = graph.subgraph(
nodes=nodes,
edges=edges,
with_node_feat=True,
with_edge_feat=True)
sub_node_index = subgraph.reindex_from_parrent_nodes(
batch_train_samples)
feed_dict = _graph_wrapper.to_feed(subgraph)
feed_dict["node_label"] = batch_train_labels
feed_dict["node_index"] = sub_node_index
feed_dict["parent_node_index"] = np.array(nodes, dtype="int64")
yield feed_dict
return work
def multiprocess_graph_reader(graph,
graph_wrapper,
samples,
node_index,
batch_size,
node_label,
with_parent_node_index=False,
num_workers=4):
"""multiprocess_graph_reader
"""
def parse_to_subgraph(rd, prefix, node_feat, _with_parent_node_index):
"""parse_to_subgraph
"""
def work():
"""work
"""
for data in rd():
feed_dict = data
for key in node_feat:
feed_dict[prefix + '/node_feat/' + key] = node_feat[key][
feed_dict["parent_node_index"]]
if not _with_parent_node_index:
del feed_dict["parent_node_index"]
yield feed_dict
return work
def reader():
"""reader"""
batch_info = list(
node_batch_iter(
node_index, node_label, batch_size=batch_size))
block_size = int(len(batch_info) / num_workers + 1)
reader_pool = []
for i in range(num_workers):
reader_pool.append(
worker(batch_info[block_size * i:block_size * (i + 1)], graph,
graph_wrapper, samples))
if len(reader_pool) == 1:
r = parse_to_subgraph(reader_pool[0],
repr(graph_wrapper), graph.node_feat,
with_parent_node_index)
else:
multi_process_sample = mp_reader.multiprocess_reader(
reader_pool, use_pipe=True, queue_size=1000)
r = parse_to_subgraph(multi_process_sample,
repr(graph_wrapper), graph.node_feat,
with_parent_node_index)
return paddle.reader.buffered(r, num_workers)
return reader()
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from preprocess import get_graph_data
import pgl
import argparse
import numpy as np
import time
from paddle import fluid
import reader
from train_tool import train_epoch, valid_epoch
from model import GaANModel
if __name__ == "__main__":
parser = argparse.ArgumentParser(description="ogb Training")
parser.add_argument("--d_name", type=str, choices=["ogbn-proteins"], default="ogbn-proteins",
help="the name of dataset in ogb")
parser.add_argument("--model", type=str, choices=["GaAN"], default="GaAN",
help="the name of model")
parser.add_argument("--mini_data", type=str, choices=["True", "False"], default="False",
help="use a small dataset to test the code")
parser.add_argument("--use_gpu", type=bool, choices=[True, False], default=True,
help="use gpu")
parser.add_argument("--gpu_id", type=int, default=0,
help="the id of gpu")
parser.add_argument("--exp_id", type=int, default=0,
help="the id of experiment")
parser.add_argument("--epochs", type=int, default=100,
help="the number of training epochs")
parser.add_argument("--lr", type=float, default=1e-2,
help="learning rate of Adam")
parser.add_argument("--rc", type=float, default=0,
help="regularization coefficient")
parser.add_argument("--log_path", type=str, default="./log",
help="the path of log")
parser.add_argument("--batch_size", type=int, default=1024,
help="the number of batch size")
parser.add_argument("--heads", type=int, default=8,
help="the number of heads of attention")
parser.add_argument("--hidden_size_a", type=int, default=24,
help="the hidden size of query and key vectors")
parser.add_argument("--hidden_size_v", type=int, default=32,
help="the hidden size of value vectors")
parser.add_argument("--hidden_size_m", type=int, default=64,
help="the hidden size of projection for computing gates")
parser.add_argument("--hidden_size_o", type=int ,default=128,
help="the hidden size of each layer in GaAN")
args = parser.parse_args()
print("Parameters Setting".center(50, "="))
print("lr = {}, rc = {}, epochs = {}, batch_size = {}".format(args.lr, args.rc, args.epochs,
args.batch_size))
print("Experiment ID: {}".format(args.exp_id).center(50, "="))
print("training in GPU: {}".format(args.gpu_id).center(50, "="))
d_name = args.d_name
# get data
g, label, train_idx, valid_idx, test_idx, evaluator = get_graph_data(d_name=d_name,
mini_data=eval(args.mini_data))
if args.model == "GaAN":
graph_model = GaANModel(112, 3, args.hidden_size_a, args.hidden_size_v, args.hidden_size_m,
args.hidden_size_o, args.heads)
# training
samples = [25, 10] # 2-hop sample size
batch_size = args.batch_size
sample_workers = 1
    place = fluid.CUDAPlace(args.gpu_id) if eval(args.use_gpu) else fluid.CPUPlace()
train_program = fluid.Program()
startup_program = fluid.Program()
with fluid.program_guard(train_program, startup_program):
gw = pgl.graph_wrapper.GraphWrapper(
name='graph',
place = place,
node_feat=g.node_feat_info(),
edge_feat=g.edge_feat_info()
)
node_index = fluid.layers.data('node_index', shape=[None, 1], dtype="int64",
append_batch_size=False)
node_label = fluid.layers.data('node_label', shape=[None, 112], dtype="float32",
append_batch_size=False)
parent_node_index = fluid.layers.data('parent_node_index', shape=[None, 1], dtype="int64",
append_batch_size=False)
output = graph_model.forward(gw)
output = fluid.layers.gather(output, node_index)
score = fluid.layers.sigmoid(output)
loss = fluid.layers.sigmoid_cross_entropy_with_logits(
x=output, label=node_label)
loss = fluid.layers.mean(loss)
val_program = train_program.clone(for_test=True)
with fluid.program_guard(train_program, startup_program):
lr = args.lr
adam = fluid.optimizer.Adam(
learning_rate=lr,
regularization=fluid.regularizer.L2DecayRegularizer(
regularization_coeff=args.rc))
adam.minimize(loss)
exe = fluid.Executor(place)
exe.run(startup_program)
train_iter = reader.multiprocess_graph_reader(
g,
gw,
samples=samples,
num_workers=sample_workers,
batch_size=batch_size,
with_parent_node_index=True,
node_index=train_idx,
node_label=np.array(label[train_idx], dtype='float32'))
val_iter = reader.multiprocess_graph_reader(
g,
gw,
samples=samples,
num_workers=sample_workers,
batch_size=batch_size,
with_parent_node_index=True,
node_index=valid_idx,
node_label=np.array(label[valid_idx], dtype='float32'))
test_iter = reader.multiprocess_graph_reader(
g,
gw,
samples=samples,
num_workers=sample_workers,
batch_size=batch_size,
with_parent_node_index=True,
node_index=test_idx,
node_label=np.array(label[test_idx], dtype='float32'))
start = time.time()
print("Training Begin".center(50, "="))
best_valid = -1.0
for epoch in range(args.epochs):
start_e = time.time()
train_loss, train_rocauc = train_epoch(
train_iter, program=train_program, exe=exe, loss=loss, score=score,
evaluator=evaluator, epoch=epoch
)
valid_loss, valid_rocauc = valid_epoch(
val_iter, program=val_program, exe=exe, loss=loss, score=score,
evaluator=evaluator, epoch=epoch)
end_e = time.time()
print("Epoch {}: train_loss={:.4},val_loss={:.4}, train_rocauc={:.4}, val_rocauc={:.4}, s/epoch={:.3}".format(
epoch, train_loss, valid_loss, train_rocauc, valid_rocauc, end_e-start_e
))
if valid_rocauc > best_valid:
print("Update: new {}, old {}".format(valid_rocauc, best_valid))
best_valid = valid_rocauc
fluid.io.save_params(executor=exe, dirname='./params/'+str(args.exp_id), main_program=val_program)
print("Test Stage".center(50, "="))
fluid.io.load_params(executor=exe, dirname='./params/'+str(args.exp_id), main_program=val_program)
test_loss, test_rocauc = valid_epoch(
test_iter, program=val_program, exe=exe, loss=loss, score=score,
evaluator=evaluator, epoch=epoch)
end = time.time()
print("test_loss={:.4},test_rocauc={:.4}, Total Time={:.3}".format(
test_loss, test_rocauc, end-start
))
print("End".center(50, "="))
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import time
from pgl.utils.logger import log
def train_epoch(batch_iter, exe, program, loss, score, evaluator, epoch, log_per_step=1):
batch = 0
total_loss = 0.0
total_sample = 0
result = 0
for batch_feed_dict in batch_iter():
batch += 1
batch_loss, y_pred = exe.run(program, fetch_list=[loss, score], feed=batch_feed_dict)
num_samples = len(batch_feed_dict["node_index"])
total_loss += batch_loss * num_samples
total_sample += num_samples
input_dict = {
"y_true": batch_feed_dict["node_label"],
"y_pred": y_pred
}
result += evaluator.eval(input_dict)["rocauc"]
return total_loss.item()/total_sample, result/batch
def valid_epoch(batch_iter, exe, program, loss, score, evaluator, epoch, log_per_step=1):
batch = 0
total_sample = 0
result = 0
total_loss = 0.0
for batch_feed_dict in batch_iter():
batch += 1
batch_loss, y_pred = exe.run(program, fetch_list=[loss, score], feed=batch_feed_dict)
input_dict = {
"y_true": batch_feed_dict["node_label"],
"y_pred": y_pred
}
result += evaluator.eval(input_dict)["rocauc"]
num_samples = len(batch_feed_dict["node_index"])
total_loss += batch_loss * num_samples
total_sample += num_samples
return total_loss.item()/total_sample, result/batch
# Self-Attention Graph Pooling
SAGPool is a graph pooling method based on self-attention. Its attention scores are computed with graph convolution, which allows the pooling method to consider both node features and graph topology. Based on PGL, we implement the SAGPool algorithm and train the model on five datasets.
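In brief (our notation; the `sag_pool` and `topk_pool` layers later in this dump implement this): a GCN produces one attention score per node, each graph keeps its top ceil(ratio * N) nodes, and the kept features are rescaled by their activated scores (in this implementation the unselected nodes are zero-masked rather than removed):

```latex
z = \mathrm{GCN}(X, A), \qquad
\mathrm{idx} = \operatorname{top}_{\lceil rN \rceil}(z), \qquad
X' = X_{\mathrm{idx}} \odot \tanh(z_{\mathrm{idx}})
```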
## Datasets
There are five datasets: D&D, PROTEINS, NCI1, NCI109 and FRANKENSTEIN. You can download them from [here](https://bj.bcebos.com/paddle-pgl/SAGPool/data.zip) and unzip the archive directly; the pickled (pkl) datasets should end up in the directory ./data.
## Dependencies
- [paddlepaddle >= 1.8](https://github.com/PaddlePaddle/paddle)
- [pgl 1.1](https://github.com/PaddlePaddle/PGL)
## How to run
```bash
python main.py --dataset_name DD --learning_rate 0.005 --weight_decay 0.00001
python main.py --dataset_name PROTEINS --learning_rate 0.001 --hidden_size 32 --weight_decay 0.00001
python main.py --dataset_name NCI1 --learning_rate 0.001 --weight_decay 0.00001
python main.py --dataset_name NCI109 --learning_rate 0.0005 --hidden_size 64 --weight_decay 0.0001 --patience 200
python main.py --dataset_name FRANKENSTEIN --learning_rate 0.001 --weight_decay 0.0001
```
## Hyperparameters
- seed: random seed
- batch\_size: the batch size
- learning\_rate: learning rate of the optimizer
- weight\_decay: the weight decay for L2 regularization
- hidden\_size: the hidden size of the GCN layers
- pooling\_ratio: the pooling ratio of SAGPool
- dropout\_ratio: the dropout ratio
- dataset\_name: the name of the dataset: DD, PROTEINS, NCI1, NCI109 or FRANKENSTEIN
- epochs: maximum number of epochs
- patience: patience for early stopping
- use\_cuda: whether to use cuda
- save\_model: the name for the best model
## Performance
We evaluate the implemented method for 20 random seeds using 10-fold cross validation, following the same training procedures as in the paper.
| dataset | mean accuracy | standard deviation | mean accuracy(paper) | standard deviation(paper) |
| ------------ | ------------- | ------------------ | -------------------- | ------------------------- |
| DD | 74.4181 | 1.0244 | 76.19 | 0.94 |
| PROTEINS | 72.7858 | 0.6617 | 70.04 | 1.47 |
| NCI1 | 75.781 | 1.2125 | 74.18 | 1.2 |
| NCI109 | 74.3156 | 1.3 | 74.06 | 0.78 |
| FRANKENSTEIN | 60.7826 | 0.629 | 62.57 | 0.6 |
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
parser = argparse.ArgumentParser()
parser.add_argument('--seed', type=int, default=777,
help='seed')
parser.add_argument('--batch_size', type=int, default=128,
help='batch size')
parser.add_argument('--learning_rate', type=float, default=0.0005,
help='learning rate')
parser.add_argument('--weight_decay', type=float, default=0.0001,
help='weight decay')
parser.add_argument('--hidden_size', type=int, default=128,
help='gcn hidden size')
parser.add_argument('--pooling_ratio', type=float, default=0.5,
help='pooling ratio of SAGPool')
parser.add_argument('--dropout_ratio', type=float, default=0.5,
help='dropout ratio')
parser.add_argument('--dataset_name', type=str, default='DD',
help='DD/PROTEINS/NCI1/NCI109/FRANKENSTEIN')
parser.add_argument('--epochs', type=int, default=100000,
help='maximum number of epochs')
parser.add_argument('--patience', type=int, default=50,
help='patience for early stopping')
parser.add_argument('--use_cuda', type=bool, default=True,
help='use cuda or cpu')
parser.add_argument('--save_model', type=str,
help='save model name')
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
import random
import pgl
from pgl.utils.logger import log
from pgl.graph import Graph, MultiGraph
import numpy as np
import pickle
class BaseDataset(object):
def __init__(self):
pass
def __getitem__(self, idx):
raise NotImplementedError
def __len__(self):
raise NotImplementedError
class Subset(BaseDataset):
"""Subset of a dataset at specified indices.
Args:
dataset (Dataset): The whole Dataset
indices (sequence): Indices in the whole set selected for subset
"""
def __init__(self, dataset, indices):
self.dataset = dataset
self.indices = indices
def __getitem__(self, idx):
return self.dataset[self.indices[idx]]
def __len__(self):
return len(self.indices)
class Dataset(BaseDataset):
def __init__(self, args):
self.args = args
with open('data/%s.pkl' % args.dataset_name, 'rb') as f:
graphs_info_list = pickle.load(f)
self.pgl_graph_list = []
self.graph_label_list = []
        # the last list entry holds dataset-level metadata (num_classes,
        # num_features); every other entry is one graph
        for i in range(len(graphs_info_list) - 1):
graph = graphs_info_list[i]
edges_l, edges_r = graph["edge_src"], graph["edge_dst"]
# add self-loops
if self.args.dataset_name != "FRANKENSTEIN":
num_nodes = graph["num_nodes"]
x = np.arange(0, num_nodes)
edges_l = np.append(edges_l, x)
edges_r = np.append(edges_r, x)
edges = list(zip(edges_l, edges_r))
g = pgl.graph.Graph(num_nodes=graph["num_nodes"], edges=edges)
g.node_feat["feat"] = graph["node_feat"]
self.pgl_graph_list.append(g)
self.graph_label_list.append(graph["label"])
self.num_classes = graphs_info_list[-1]["num_classes"]
self.num_features = graphs_info_list[-1]["num_features"]
def __getitem__(self, idx):
return self.pgl_graph_list[idx], self.graph_label_list[idx]
def shuffle(self):
"""shuffle the dataset.
"""
cc = list(zip(self.pgl_graph_list, self.graph_label_list))
random.seed(self.args.seed)
random.shuffle(cc)
a, b = zip(*cc)
self.pgl_graph_list[:], self.graph_label_list[:] = a, b
def __len__(self):
return len(self.pgl_graph_list)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import paddle.fluid as fluid
import paddle.fluid.layers as L
def norm_gcn(gw, feature, hidden_size, activation, name, norm=None):
"""Implementation of graph convolutional neural networks(GCN), using different
normalization method.
Args:
gw: Graph wrapper object.
feature: A tensor with shape (num_nodes, feature_size).
hidden_size: The hidden size for norm gcn.
activation: The activation for the output.
name: Norm gcn layer names.
norm: If norm is not None, then the feature will be normalized. Norm must
be tensor with shape (num_nodes,) and dtype float32.
Return:
A tensor with shape (num_nodes, hidden_size)
"""
size = feature.shape[-1]
feature = L.fc(feature,
size=hidden_size,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name))
if norm is not None:
src, dst = gw.edges
norm_src = L.gather(norm, src, overwrite=False)
norm_dst = L.gather(norm, dst, overwrite=False)
norm = norm_src * norm_dst
def send_src_copy(src_feat, dst_feat, edge_feat):
return src_feat["h"] * norm
else:
def send_src_copy(src_feat, dst_feat, edge_feat):
return src_feat["h"]
msg = gw.send(send_src_copy, nfeat_list=[("h", feature)])
output = gw.recv(msg, "sum")
bias = L.create_parameter(
shape=[hidden_size],
dtype='float32',
is_bias=True,
name=name + '_bias')
output = L.elementwise_add(output, bias, act=activation)
return output
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import collections
import paddle
import pgl
from pgl.utils.logger import log
from pgl.graph import Graph, MultiGraph
def batch_iter(data, batch_size):
"""node_batch_iter
"""
size = len(data)
perm = np.arange(size)
np.random.shuffle(perm)
start = 0
while start < size:
index = perm[start:start + batch_size]
start += batch_size
yield data[index]
def scan_batch_iter(data, batch_size):
"""scan_batch_iter
"""
batch = []
for example in data.scan():
batch.append(example)
if len(batch) == batch_size:
yield batch
batch = []
if len(batch) > 0:
yield batch
def label_to_onehot(labels):
    """Return one-hot representations of labels (all five supported datasets
    are binary classification).
    """
onehot_labels = []
for label in labels:
if label == 0:
onehot_labels.append([1, 0])
else:
onehot_labels.append([0, 1])
onehot_labels = np.array(onehot_labels)
return onehot_labels
class GraphDataloader(object):
"""Graph Dataloader
"""
def __init__(self,
dataset,
graph_wrapper,
batch_size,
seed=0,
buf_size=1000,
shuffle=True):
self.shuffle = shuffle
self.seed = seed
self.batch_size = batch_size
self.dataset = dataset
self.buf_size = buf_size
self.graph_wrapper = graph_wrapper
def batch_fn(self, batch_examples):
""" batch_fun batch producer """
graphs = [b[0] for b in batch_examples]
labels = [b[1] for b in batch_examples]
join_graph = MultiGraph(graphs)
# normalize
indegree = join_graph.indegree()
norm = np.zeros_like(indegree, dtype="float32")
norm[indegree > 0] = np.power(indegree[indegree > 0], -0.5)
join_graph.node_feat["norm"] = np.expand_dims(norm, -1)
feed_dict = self.graph_wrapper.to_feed(join_graph)
labels = np.array(labels)
feed_dict["labels_1dim"] = labels
labels = label_to_onehot(labels)
feed_dict["labels"] = labels
graph_lod = join_graph.graph_lod
graph_id = []
for i in range(1, len(graph_lod)):
graph_node_num = graph_lod[i] - graph_lod[i - 1]
graph_id += [i - 1] * graph_node_num
graph_id = np.array(graph_id, dtype="int32")
feed_dict["graph_id"] = graph_id
return feed_dict
def batch_iter(self):
""" batch_iter """
if self.shuffle:
for batch in batch_iter(self, self.batch_size):
yield batch
else:
for batch in scan_batch_iter(self, self.batch_size):
yield batch
def __len__(self):
"""__len__"""
return len(self.dataset)
def __getitem__(self, idx):
"""__getitem__"""
        if isinstance(idx, collections.abc.Iterable):
return [self.dataset[bidx] for bidx in idx]
else:
return self.dataset[idx]
def __iter__(self):
"""__iter__"""
def func_run():
for batch_examples in self.batch_iter():
batch_dict = self.batch_fn(batch_examples)
yield batch_dict
r = paddle.reader.buffered(func_run, self.buf_size)
for batch in r():
yield batch
def scan(self):
"""scan"""
for example in self.dataset:
yield example
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import numpy as np
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as L
import pgl
from pgl.graph_wrapper import GraphWrapper
from pgl.utils.logger import log
from conv import norm_gcn
from pgl.layers.conv import gcn
def topk_pool(gw, score, graph_id, ratio):
"""Implementation of topk pooling, where k means pooling ratio.
Args:
gw: Graph wrapper object.
score: The attention score of all nodes, which is used to select
important nodes.
graph_id: The graphs that the nodes belong to.
ratio: The pooling ratio of nodes we want to select.
Return:
perm: The index of nodes we choose.
ratio_length: The selected node numbers of each graph.
"""
graph_lod = gw.graph_lod
graph_nodes = gw.num_nodes
num_graph = gw.num_graph
num_nodes = L.ones(shape=[graph_nodes], dtype="float32")
num_nodes = L.lod_reset(num_nodes, graph_lod)
num_nodes_per_graph = L.sequence_pool(num_nodes, pool_type='sum')
max_num_nodes = L.reduce_max(num_nodes_per_graph, dim=0)
max_num_nodes = L.cast(max_num_nodes, dtype="int32")
index = L.arange(0, gw.num_nodes, dtype="int64")
offset = L.gather(graph_lod, graph_id, overwrite=False)
index = (index - offset) + (graph_id * max_num_nodes)
index.stop_gradient = True
# padding
dense_score = L.fill_constant(shape=[num_graph * max_num_nodes],
dtype="float32", value=-999999)
index = L.reshape(index, shape=[-1])
dense_score = L.scatter(dense_score, index, updates=score)
num_graph = L.cast(num_graph, dtype="int32")
dense_score = L.reshape(dense_score,
shape=[num_graph, max_num_nodes])
# record the sorted index
_, sort_index = L.argsort(dense_score, axis=-1, descending=True)
# recover the index range
graph_lod = graph_lod[:-1]
graph_lod = L.reshape(graph_lod, shape=[-1, 1])
graph_lod = L.cast(graph_lod, dtype="int64")
sort_index = L.elementwise_add(sort_index, graph_lod, axis=-1)
sort_index = L.reshape(sort_index, shape=[-1, 1])
# use sequence_slice to choose selected node index
pad_lod = L.arange(0, (num_graph + 1) * max_num_nodes, step=max_num_nodes, dtype="int32")
sort_index = L.lod_reset(sort_index, pad_lod)
ratio_length = L.ceil(num_nodes_per_graph * ratio)
ratio_length = L.cast(ratio_length, dtype="int64")
ratio_length = L.reshape(ratio_length, shape=[-1, 1])
offset = L.zeros(shape=[num_graph, 1], dtype="int64")
choose_index = L.sequence_slice(input=sort_index, offset=offset, length=ratio_length)
perm = L.reshape(choose_index, shape=[-1])
return perm, ratio_length
def sag_pool(gw, feature, ratio, graph_id, dataset, name, activation=L.tanh):
"""Implementation of self-attention graph pooling (SAGPool)
This is an implementation of the paper SELF-ATTENTION GRAPH POOLING
(https://arxiv.org/pdf/1904.08082.pdf)
Args:
gw: Graph wrapper object.
feature: A tensor with shape (num_nodes, feature_size).
ratio: The pooling ratio of nodes we want to select.
graph_id: The graphs that the nodes belong to.
dataset: To differentiate FRANKENSTEIN dataset and other datasets.
name: The name of SAGPool layer.
activation: The activation function.
Return:
new_feature: A tensor with shape (num_nodes, feature_size), and the unselected
nodes' feature is masked by zero.
ratio_length: The selected node numbers of each graph.
"""
if dataset == "FRANKENSTEIN":
gcn_ = gcn
else:
gcn_ = norm_gcn
score = gcn_(gw=gw,
feature=feature,
hidden_size=1,
activation=None,
norm=gw.node_feat["norm"],
name=name)
score = L.squeeze(score, axes=[])
perm, ratio_length = topk_pool(gw, score, graph_id, ratio)
mask = L.zeros_like(score)
mask = L.cast(mask, dtype="float32")
updates = L.ones_like(perm)
updates = L.cast(updates, dtype="float32")
mask = L.scatter(mask, perm, updates)
new_feature = L.elementwise_mul(feature, mask, axis=0)
temp_score = activation(score)
new_feature = L.elementwise_mul(new_feature, temp_score, axis=0)
return new_feature, ratio_length
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import os
import argparse
import re
import time
import random
import math
import warnings
import numpy as np
from sklearn.model_selection import KFold
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as L
import pgl
from pgl.utils.logger import log
from model import GlobalModel
from base_dataset import Subset, Dataset
from dataloader import GraphDataloader
from args import parser
warnings.filterwarnings("ignore")
def main(args, train_dataset, val_dataset, test_dataset):
"""main function for running one testing results.
"""
log.info("Train Examples: %s" % len(train_dataset))
log.info("Val Examples: %s" % len(val_dataset))
log.info("Test Examples: %s" % len(test_dataset))
train_program = fluid.Program()
train_program.random_seed = args.seed
startup_program = fluid.Program()
startup_program.random_seed = args.seed
if args.use_cuda:
place = fluid.CUDAPlace(0)
else:
place = fluid.CPUPlace()
exe = fluid.Executor(place)
log.info("building model")
with fluid.program_guard(train_program, startup_program):
with fluid.unique_name.guard():
graph_model = GlobalModel(args, dataset)
train_loader = GraphDataloader(train_dataset,
graph_model.graph_wrapper,
batch_size=args.batch_size)
optimizer = fluid.optimizer.Adam(learning_rate=args.learning_rate,
regularization=fluid.regularizer.L2DecayRegularizer(args.weight_decay))
optimizer.minimize(graph_model.loss)
exe.run(startup_program)
test_program = fluid.Program()
test_program = train_program.clone(for_test=True)
val_loader = GraphDataloader(val_dataset,
graph_model.graph_wrapper,
batch_size=args.batch_size,
shuffle=False)
test_loader = GraphDataloader(test_dataset,
graph_model.graph_wrapper,
batch_size=args.batch_size,
shuffle=False)
min_loss = 1e10
global_step = 0
for epoch in range(args.epochs):
for feed_dict in train_loader:
loss, pred = exe.run(train_program,
feed=feed_dict,
fetch_list=[graph_model.loss, graph_model.pred])
log.info("Epoch: %d, global_step: %d, Training loss: %f" \
% (epoch, global_step, loss))
global_step += 1
# validation
valid_loss = 0.
correct = 0.
for feed_dict in val_loader:
valid_loss_, correct_ = exe.run(test_program,
feed=feed_dict,
fetch_list=[graph_model.loss, graph_model.correct])
valid_loss += valid_loss_
correct += correct_
if epoch % 50 == 0:
log.info("Epoch:%d, Validation loss: %f, Validation acc: %f" \
% (epoch, valid_loss, correct / len(val_loader)))
if valid_loss < min_loss:
min_loss = valid_loss
patience = 0
path = "./save/%s" % args.dataset_name
if not os.path.exists(path):
os.makedirs(path)
fluid.save(train_program, "%s/%s" \
% (path, args.save_model))
log.info("Model saved at epoch %d" % epoch)
else:
patience += 1
if patience > args.patience:
break
correct = 0.
fluid.load(test_program, "./save/%s/%s" \
% (args.dataset_name, args.save_model), exe)
for feed_dict in test_loader:
correct_ = exe.run(test_program,
feed=feed_dict,
fetch_list=[graph_model.correct])
correct += correct_[0]
log.info("Test acc: %f" % (correct / len(test_loader)))
return correct / len(test_loader)
def split_10_cv(dataset, args):
    """10-fold cross validation
    """
    dataset.shuffle()
    # dummy features/labels: KFold only needs the sample count to split indices
    X = np.array([0] * len(dataset))
    y = X
    kf = KFold(n_splits=10, shuffle=False)
i = 1
test_acc = []
for train_index, test_index in kf.split(X, y):
train_val_dataset = Subset(dataset, train_index)
test_dataset = Subset(dataset, test_index)
train_val_index_range = list(range(0, len(train_val_dataset)))
num_val = int(len(train_val_dataset) / 9)
val_dataset = Subset(train_val_dataset, train_val_index_range[:num_val])
train_dataset = Subset(train_val_dataset, train_val_index_range[num_val:])
log.info("######%d fold of 10-fold cross validation######" % i)
i += 1
test_acc_ = main(args, train_dataset, val_dataset, test_dataset)
test_acc.append(test_acc_)
mean_acc = sum(test_acc) / len(test_acc)
return mean_acc, test_acc
def random_seed_20(args, dataset):
"""run for 20 random seeds
"""
alist = random.sample(range(1,1000),20)
test_acc_fold = []
for seed in alist:
log.info('############ Seed %d ############' % seed)
args.seed = seed
test_acc_fold_, _ = split_10_cv(dataset, args)
log.info('Mean test acc at seed %d: %f' % (seed, test_acc_fold_))
test_acc_fold.append(test_acc_fold_)
mean_acc = sum(test_acc_fold) / len(test_acc_fold)
temp = [(acc - mean_acc) ** 2 for acc in test_acc_fold]
standard_std = math.sqrt(sum(temp) / len(test_acc_fold))
log.info('Final mean test acc over 20 random seeds (mean of 10-fold): %f' % mean_acc)
log.info('Final test acc std over 20 random seeds (mean of 10-fold): %f' % standard_std)
if __name__ == "__main__":
args = parser.parse_args()
log.info('loading data...')
dataset = Dataset(args)
log.info("preprocess finish.")
args.num_classes = dataset.num_classes
args.num_features = dataset.num_features
random_seed_20(args, dataset)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from random import random
import numpy as np
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as L
import pgl
from pgl.graph import Graph, MultiGraph
from pgl.graph_wrapper import GraphWrapper
from pgl.utils.logger import log
from pgl.layers.conv import gcn
from layers import sag_pool
from conv import norm_gcn
class GlobalModel(object):
"""Implementation of global pooling architecture with SAGPool.
"""
def __init__(self, args, dataset):
self.args = args
self.dataset = dataset
self.hidden_size = args.hidden_size
self.num_classes = args.num_classes
self.num_features = args.num_features
self.pooling_ratio = args.pooling_ratio
self.dropout_ratio = args.dropout_ratio
self.batch_size = args.batch_size
graph_data = []
g, label = self.dataset[0]
graph_data.append(g)
g, label = self.dataset[1]
graph_data.append(g)
batch_graph = MultiGraph(graph_data)
indegree = batch_graph.indegree()
norm = np.zeros_like(indegree, dtype="float32")
norm[indegree > 0] = np.power(indegree[indegree > 0], -0.5)  # D^-0.5 for GCN normalization
batch_graph.node_feat["norm"] = np.expand_dims(norm, -1)
graph_data = batch_graph
self.graph_wrapper = GraphWrapper(
name="graph",
node_feat=graph_data.node_feat_info()
)
self.labels = L.data(
"labels",
shape=[None, self.args.num_classes],
dtype="int32",
append_batch_size=False)
self.labels_1dim = L.data(
"labels_1dim",
shape=[None],
dtype="int32",
append_batch_size=False)
self.graph_id = L.data(
"graph_id",
shape=[None],
dtype="int32",
append_batch_size=False)
if self.args.dataset_name == "FRANKENSTEIN":
self.gcn = gcn
else:
self.gcn = norm_gcn
self.build_model()
def build_model(self):
node_features = self.graph_wrapper.node_feat["feat"]
output = self.gcn(gw=self.graph_wrapper,
feature=node_features,
hidden_size=self.hidden_size,
activation="relu",
norm=self.graph_wrapper.node_feat["norm"],
name="gcn_layer_1")
output1 = output
output = self.gcn(gw=self.graph_wrapper,
feature=output,
hidden_size=self.hidden_size,
activation="relu",
norm=self.graph_wrapper.node_feat["norm"],
name="gcn_layer_2")
output2 = output
output = self.gcn(gw=self.graph_wrapper,
feature=output,
hidden_size=self.hidden_size,
activation="relu",
norm=self.graph_wrapper.node_feat["norm"],
name="gcn_layer_3")
output = L.concat(input=[output1, output2, output], axis=-1)
output, ratio_length = sag_pool(gw=self.graph_wrapper,
feature=output,
ratio=self.pooling_ratio,
graph_id=self.graph_id,
dataset=self.args.dataset_name,
name="sag_pool_1")
output = L.lod_reset(output, self.graph_wrapper.graph_lod)
cat1 = L.sequence_pool(output, "sum")
ratio_length = L.cast(ratio_length, dtype="float32")
cat1 = L.elementwise_div(cat1, ratio_length, axis=-1)  # sum / number of kept nodes = mean readout
cat2 = L.sequence_pool(output, "max")  # max readout
output = L.concat(input=[cat2, cat1], axis=-1)  # graph representation: [max || mean]
output = L.fc(output, size=self.hidden_size, act="relu")
output = L.dropout(output, dropout_prob=self.dropout_ratio)
output = L.fc(output, size=self.hidden_size // 2, act="relu")
output = L.fc(output, size=self.num_classes, act=None,
param_attr=fluid.ParamAttr(name="final_fc"))
self.labels = L.cast(self.labels, dtype="float32")
loss = L.sigmoid_cross_entropy_with_logits(x=output, label=self.labels)
self.loss = L.mean(loss)
pred = L.sigmoid(output)
self.pred = L.argmax(x=pred, axis=-1)
correct = L.equal(self.pred, self.labels_1dim)
correct = L.cast(correct, dtype="int32")
self.correct = L.reduce_sum(correct)
# DeeperGCN: All You Need to Train Deeper GCNs
See more details in the paper: https://arxiv.org/pdf/2006.07739.pdf
### Datasets
The datasets contain three citation networks: CORA, PUBMED, CITESEER. The details for these three datasets can be found in the [paper](https://arxiv.org/abs/1609.02907).
### Dependencies
- paddlepaddle>=1.6
- pgl
### Performance
We train our models for 200 epochs and report the accuracy on the test dataset.
| Dataset | Accuracy |
| --- | --- |
| Cora | ~77% |
### How to run
For example, to train DeeperGCN on the Cora dataset with GPU:
```
python train.py --dataset cora --use_cuda
```
#### Hyperparameters
- dataset: The citation dataset: "cora", "citeseer" or "pubmed".
- use_cuda: Use GPU if `--use_cuda` is set.
import pgl
import paddle.fluid as fluid
def DeeperGCN(gw, feature, num_layers,
hidden_size, num_tasks, name, dropout_prob):
"""Implementation of DeeperGCN, see the paper
"DeeperGCN: All You Need to Train Deeper GCNs" in
https://arxiv.org/pdf/2006.07739.pdf
Args:
gw: Graph wrapper object
feature: A tensor with shape (num_nodes, feature_size)
num_layers: num of layers in DeeperGCN
hidden_size: hidden_size in DeeperGCN
num_tasks: dimension of the final prediction
name: deeper gcn layer names
dropout_prob: dropout prob in DeeperGCN
Return:
A tensor with shape (num_nodes, num_tasks)
"""
beta = "dynamic"
feature = fluid.layers.fc(feature,
hidden_size,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_weight'))
output = pgl.layers.gen_conv(gw, feature, name=name+"_gen_conv_0", beta=beta)
for layer in range(num_layers):
# LN/BN->ReLU->GraphConv->Res
old_output = output
# 1. Layer Norm
output = fluid.layers.layer_norm(
output,
begin_norm_axis=1,
param_attr=fluid.ParamAttr(
name="norm_scale_%s_%d" % (name, layer),
initializer=fluid.initializer.Constant(1.0)),
bias_attr=fluid.ParamAttr(
name="norm_bias_%s_%d" % (name, layer),
initializer=fluid.initializer.Constant(0.0)))
# 2. ReLU
output = fluid.layers.relu(output)
# 3. Dropout
output = fluid.layers.dropout(output,
dropout_prob=dropout_prob,
dropout_implementation="upscale_in_train")
# 4. GenConv
output = pgl.layers.gen_conv(gw, output,
name=name+"_gen_conv_%d"%layer, beta=beta)
# 5. Residual connection
output = output + old_output
# final layer: LN + ReLU + dropout
output = fluid.layers.layer_norm(
output,
begin_norm_axis=1,
param_attr=fluid.ParamAttr(
name="norm_scale_%s_%d" % (name, num_layers),
initializer=fluid.initializer.Constant(1.0)),
bias_attr=fluid.ParamAttr(
name="norm_bias_%s_%d" % (name, num_layers),
initializer=fluid.initializer.Constant(0.0)))
output = fluid.layers.relu(output)
output = fluid.layers.dropout(output,
dropout_prob=dropout_prob,
dropout_implementation="upscale_in_train")
# final prediction
output = fluid.layers.fc(output,
num_tasks,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_final_weight'))
return output
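# A minimal usage sketch (an illustration, assuming a GraphWrapper `gw` whose
# nodes carry a "words" feature, as in the training script below):
#
#   logits = DeeperGCN(gw, gw.node_feat["words"], num_layers=7,
#                      hidden_size=64, num_tasks=dataset.num_classes,
#                      name="deepergcn", dropout_prob=0.1)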
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
#-*- coding: utf-8 -*-
import pgl
from pgl import data_loader
from pgl.utils.logger import log
import paddle.fluid as fluid
import numpy as np
import time
import argparse
from pgl.utils.log_writer import LogWriter # vdl
from model import DeeperGCN
def load(name):
if name == 'cora':
dataset = data_loader.CoraDataset()
elif name == "pubmed":
dataset = data_loader.CitationDataset("pubmed", symmetry_edges=False)
elif name == "citeseer":
dataset = data_loader.CitationDataset("citeseer", symmetry_edges=False)
else:
raise ValueError(name + " dataset doesn't exist")
return dataset
def main(args):
# vdl
writer = LogWriter("checkpoints/train_history")
dataset = load(args.dataset)
place = fluid.CUDAPlace(0) if args.use_cuda else fluid.CPUPlace()
train_program = fluid.Program()
startup_program = fluid.Program()
test_program = fluid.Program()
hidden_size = 64
num_layers = 7
with fluid.program_guard(train_program, startup_program):
gw = pgl.graph_wrapper.GraphWrapper(
name="graph",
node_feat=dataset.graph.node_feat_info())
output = DeeperGCN(gw,
gw.node_feat["words"],
num_layers,
hidden_size,
dataset.num_classes,
"deepercnn",
0.1)
node_index = fluid.layers.data(
"node_index",
shape=[None, 1],
dtype="int64",
append_batch_size=False)
node_label = fluid.layers.data(
"node_label",
shape=[None, 1],
dtype="int64",
append_batch_size=False)
pred = fluid.layers.gather(output, node_index)
loss, pred = fluid.layers.softmax_with_cross_entropy(
logits=pred, label=node_label, return_softmax=True)
acc = fluid.layers.accuracy(input=pred, label=node_label, k=1)
loss = fluid.layers.mean(loss)
test_program = train_program.clone(for_test=True)
with fluid.program_guard(train_program, startup_program):
adam = fluid.optimizer.Adam(
regularization=fluid.regularizer.L2DecayRegularizer(
regularization_coeff=0.0005),
learning_rate=0.005)
adam.minimize(loss)
exe = fluid.Executor(place)
exe.run(startup_program)
feed_dict = gw.to_feed(dataset.graph)
train_index = dataset.train_index
train_label = np.expand_dims(dataset.y[train_index], -1)
train_index = np.expand_dims(train_index, -1)
val_index = dataset.val_index
val_label = np.expand_dims(dataset.y[val_index], -1)
val_index = np.expand_dims(val_index, -1)
test_index = dataset.test_index
test_label = np.expand_dims(dataset.y[test_index], -1)
test_index = np.expand_dims(test_index, -1)
# get beta param
beta_param_list = []
for param in fluid.io.get_program_parameter(train_program):
if param.name.endswith("_beta"):
beta_param_list.append(param)
dur = []
for epoch in range(200):
if epoch >= 3:
t0 = time.time()
feed_dict["node_index"] = np.array(train_index, dtype="int64")
feed_dict["node_label"] = np.array(train_label, dtype="int64")
train_loss, train_acc = exe.run(train_program,
feed=feed_dict,
fetch_list=[loss, acc],
return_numpy=True)
for param in beta_param_list:
beta = np.array(fluid.global_scope().find_var(param.name).get_tensor())
writer.add_scalar("beta/"+param.name, beta, epoch)
if epoch >= 3:
time_per_epoch = 1.0 * (time.time() - t0)
dur.append(time_per_epoch)
feed_dict["node_index"] = np.array(val_index, dtype="int64")
feed_dict["node_label"] = np.array(val_label, dtype="int64")
val_loss, val_acc = exe.run(test_program,
feed=feed_dict,
fetch_list=[loss, acc],
return_numpy=True)
log.info("Epoch %d " % epoch + "(%.5lf sec) " % np.mean(dur) +
"Train Loss: %f " % train_loss + "Train Acc: %f " % train_acc
+ "Val Loss: %f " % val_loss + "Val Acc: %f " % val_acc)
feed_dict["node_index"] = np.array(test_index, dtype="int64")
feed_dict["node_label"] = np.array(test_label, dtype="int64")
test_loss, test_acc = exe.run(test_program,
feed=feed_dict,
fetch_list=[loss, acc],
return_numpy=True)
log.info("Accuracy: %f" % test_acc)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='DeeperGCN')
parser.add_argument(
"--dataset", type=str, default="cora", help="dataset (cora, pubmed)")
parser.add_argument("--use_cuda", action='store_true', help="use_cuda")
args = parser.parse_args()
log.info(args)
main(args)
......@@ -6,54 +6,32 @@ information (e.g., text attributes) to efficiently generate node embeddings for
For high scalability, we use Redis as a distributed graph storage solution and train GraphSAGE against the Redis server.
### Datasets (Quickstart)
The Reddit dataset should be downloaded from the links below; details on the dataset can be found in the [GraphSAGE paper](https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf).
- reddit.npz: https://drive.google.com/open?id=19SphVl_Oe8SJ1r87Hr5a6znx3nJu1F2J
- reddit_adj.npz: https://drive.google.com/open?id=174vb0Ws7Vxk_QTUtxqTgDHSQ4El4qDHt
Alternatively, the Reddit dataset has been preprocessed and packed into a docker image, which can be pulled with the following command.
```sh
docker pull githubutilities/reddit_redis_demo:v0.1
```
Download `reddit.npz` and `reddit_adj.npz` into the `data` directory for further preprocessing.
### Dependencies
```txt
- paddlepaddle>=1.6
- pgl
- scipy
- redis==2.10.6
- redis-py-cluster==1.3.6
```

```sh
pip install -r requirements.txt
```
### How to run
#### 1. Preprocess the data and start the reddit data service
```sh
# Option A: run the preprocessed docker image
docker run \
--net=host \
-d --rm \
--name reddit_demo \
-it githubutilities/reddit_redis_demo:v0.1 \
/bin/bash -c "/bin/bash ./before_hook.sh && /bin/bash"
docker logs -f `docker ps -aqf "name=reddit_demo"`
# Option B: preprocess locally and start the redis service
pushd ./redis_setup
/bin/bash ./before_hook.sh
popd
```
#### 2. Train the GraphSAGE model
```sh
# single-node training
python train.py --use_cuda --epoch 10 --graphsage_type graphsage_mean --sample_workers 10
# or launch distributed training (parameter servers + trainers, see cloud_run.sh)
sh ./cloud_run.sh
```
#### Hyperparameters
- epoch: Number of training epochs (default: 10).
- use_cuda: Use GPU if `--use_cuda` is set.
- graphsage_type: We support 4 aggregator types: "graphsage_mean", "graphsage_maxpool", "graphsage_meanpool" and "graphsage_lstm".
- sample_workers: The number of workers for multiprocessing subgraph sampling.
- lr: Learning rate.
- batch_size: Batch size.
- samples_1: The max neighbors for the first hop neighbor sampling. (default: 25)
- samples_2: The max neighbors for the second hop neighbor sampling. (default: 10)
- hidden_size: The hidden size of the GraphSAGE models.
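A full invocation combining these flags might look like the following (a hypothetical example; all values shown are the defaults from `train.py`):

```sh
python train.py --use_cuda --epoch 10 --graphsage_type graphsage_mean --sample_workers 10 --lr 0.01 --batch_size 128 --samples_1 25 --samples_2 10 --hidden_size 128
```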
#!/bin/bash
set -x
mode=${1}
source ./utils.sh
unset http_proxy https_proxy
source ./local_config
if [ ! -d ${log_dir} ]; then
mkdir ${log_dir}
fi
for((i=0;i<${PADDLE_PSERVERS_NUM};i++))
do
echo "start ps server: ${i}"
echo $log_dir
TRAINING_ROLE="PSERVER" PADDLE_TRAINER_ID=${i} sh job.sh &> $log_dir/pserver.$i.log &
done
sleep 10s
for((j=0;j<${PADDLE_TRAINERS_NUM};j++))
do
echo "start ps work: ${j}"
TRAINING_ROLE="TRAINER" PADDLE_TRAINER_ID=${j} sh job.sh &> $log_dir/worker.$j.log &
done
tail -f $log_dir/worker.0.log
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import argparse
import time
import os
import math
import numpy as np
import paddle.fluid as F
import paddle.fluid.layers as L
from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet
from paddle.fluid.transpiler.distribute_transpiler import DistributeTranspilerConfig
import paddle.fluid.incubate.fleet.base.role_maker as role_maker
from pgl.utils.logger import log
from model import GraphsageModel
from utils import load_config
import reader
def init_role():
# reset the place according to role of parameter server
training_role = os.getenv("TRAINING_ROLE", "TRAINER")
paddle_role = role_maker.Role.WORKER
place = F.CPUPlace()
if training_role == "PSERVER":
paddle_role = role_maker.Role.SERVER
# set the fleet runtime environment according to configure
ports = os.getenv("PADDLE_PORT", "6174").split(",")
pserver_ips = os.getenv("PADDLE_PSERVERS").split(",") # ip,ip...
eplist = []
if len(ports) > 1:
# local debug mode, multi port
for port in ports:
eplist.append(':'.join([pserver_ips[0], port]))
else:
# distributed mode, multi ip
for ip in pserver_ips:
eplist.append(':'.join([ip, ports[0]]))
pserver_endpoints = eplist # ip:port,ip:port...
worker_num = int(os.getenv("PADDLE_TRAINERS_NUM", "0"))
trainer_id = int(os.getenv("PADDLE_TRAINER_ID", "0"))
role = role_maker.UserDefinedRoleMaker(
current_id=trainer_id,
role=paddle_role,
worker_num=worker_num,
server_endpoints=pserver_endpoints)
fleet.init(role)
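# Endpoint resolution example (mirroring the local_config shipped with this
# example; values are illustrative): PADDLE_PSERVERS="127.0.0.1" with
# PADDLE_PORT="6184,6185" is local debug mode and yields
# ["127.0.0.1:6184", "127.0.0.1:6185"], while PADDLE_PSERVERS="10.0.0.1,10.0.0.2"
# with PADDLE_PORT="6174" is distributed mode and yields
# ["10.0.0.1:6174", "10.0.0.2:6174"].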
def optimization(base_lr, loss, optimizer='adam'):
if optimizer == 'sgd':
optimizer = F.optimizer.SGD(base_lr)
elif optimizer == 'adam':
optimizer = F.optimizer.Adam(base_lr, lazy_mode=True)
else:
raise ValueError
log.info('learning rate:%f' % (base_lr))
# create the DistributeTranspiler config
config = DistributeTranspilerConfig()
config.sync_mode = False
#config.runtime_split_send_recv = False
config.slice_var_up = False
#create the distributed optimizer
optimizer = fleet.distributed_optimizer(optimizer, config)
optimizer.minimize(loss)
def build_compiled_prog(train_program, model_loss):
num_threads = int(os.getenv("CPU_NUM", 10))
trainer_id = int(os.getenv("PADDLE_TRAINER_ID", 0))
exec_strategy = F.ExecutionStrategy()
exec_strategy.num_threads = num_threads
#exec_strategy.use_experimental_executor = True
build_strategy = F.BuildStrategy()
build_strategy.enable_inplace = True
#build_strategy.memory_optimize = True
build_strategy.memory_optimize = False
build_strategy.remove_unnecessary_lock = False
if num_threads > 1:
build_strategy.reduce_strategy = F.BuildStrategy.ReduceStrategy.Reduce
compiled_prog = F.compiler.CompiledProgram(
train_program).with_data_parallel(loss_name=model_loss.name)
return compiled_prog
def fake_py_reader(data_iter, num):
def fake_iter():
queue = []
for idx, data in enumerate(data_iter()):
queue.append(data)
if len(queue) == num:
yield queue
queue = []
if len(queue) > 0:
while len(queue) < num:
queue.append(queue[-1])
yield queue
return fake_iter
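# Example (hypothetical): with num=2 and an iterator yielding [a, b, c],
# fake_iter yields [a, b] and then [c, c] -- the tail is padded by repeating
# the last sample so every device receives a feed dict in each step.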
def train_prog(exe, program, model, pyreader, args):
trainer_id = int(os.getenv("PADDLE_TRAINER_ID", "0"))
start = time.time()
batch = 0
total_loss = 0.
total_acc = 0.
total_sample = 0
for epoch_idx in range(args.num_epoch):
for step, batch_feed_dict in enumerate(pyreader()):
try:
cpu_time = time.time()
batch += 1
batch_loss, batch_acc = exe.run(
program,
feed=batch_feed_dict,
fetch_list=[model.loss, model.acc])
end = time.time()
if batch % args.log_per_step == 0:
log.info(
"Batch %s Loss %s Acc %s \t Speed(per batch) %.5lf/%.5lf sec"
% (batch, np.mean(batch_loss), np.mean(batch_acc), (end - start) /batch, (end - cpu_time)))
if step % args.steps_per_save == 0:
save_path = args.save_path
if trainer_id == 0:
model_path = os.path.join(save_path, "%s" % step)
fleet.save_persistables(exe, model_path)
except Exception as e:
log.info("Pyreader train error")
log.exception(e)
def main(args):
log.info("start")
worker_num = int(os.getenv("PADDLE_TRAINERS_NUM", "0"))
num_devices = int(os.getenv("CPU_NUM", 10))
model = GraphsageModel(args)
loss = model.forward()
train_iter = reader.get_iter(args, model.graph_wrapper, 'train')
pyreader = fake_py_reader(train_iter, num_devices)
# init fleet
init_role()
optimization(args.lr, loss, args.optimizer)
# init and run server or worker
if fleet.is_server():
fleet.init_server(args.warm_start_from_dir)
fleet.run_server()
if fleet.is_worker():
log.info("start init worker done")
fleet.init_worker()
# only the worker loads the samples
log.info("init worker done")
exe = F.Executor(F.CPUPlace())
exe.run(fleet.startup_program)
log.info("Startup done")
compiled_prog = build_compiled_prog(fleet.main_program, loss)
train_prog(exe, compiled_prog, model, pyreader, args)
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='graphsage')
parser.add_argument("-c", "--config", type=str, default="./config.yaml")
args = parser.parse_args()
config = load_config(args.config)
log.info(config)
main(config)
# model config
hidden_size: 128
num_class: 41
samples: [25, 10]
graphsage_type: "graphsage_mean"
# training config
num_epoch: 10
batch_size: 128
num_sample_workers: 10
optimizer: "adam"
lr: 0.01
warm_start_from_dir: null
steps_per_save: 1000
log_per_step: 1
save_path: "./checkpoints"
log_dir: "./logs"
CPU_NUM: 1
#!/bin/bash
set -x
source ./utils.sh
export CPU_NUM=$CPU_NUM
export FLAGS_rpc_deadline=3000000
export FLAGS_communicator_send_queue_size=1
export FLAGS_communicator_min_send_grad_num_before_recv=0
export FLAGS_communicator_max_merge_var_num=1
export FLAGS_communicator_merge_sparse_grad=0
python -u cluster_train.py -c config.yaml
#!/bin/bash
export PADDLE_TRAINERS_NUM=2
export PADDLE_PSERVERS_NUM=2
export PADDLE_PORT=6184,6185
export PADDLE_PSERVERS="127.0.0.1"
......@@ -11,10 +11,22 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
graphsage model.
"""
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import math
import pgl
import numpy as np
import paddle
import paddle.fluid.layers as L
import paddle.fluid as F
import paddle.fluid as fluid
def copy_send(src_feat, dst_feat, edge_feat):
return src_feat["h"]
......@@ -128,3 +140,87 @@ def graphsage_lstm(gw, feature, hidden_size, act, name):
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
return output
def build_graph_model(graph_wrapper, num_class, k_hop, graphsage_type,
hidden_size):
node_index = fluid.layers.data(
"node_index", shape=[None], dtype="int64", append_batch_size=False)
node_label = fluid.layers.data(
"node_label", shape=[None, 1], dtype="int64", append_batch_size=False)
#feature = fluid.layers.gather(feature, graph_wrapper.node_feat['feats'])
feature = graph_wrapper.node_feat['feats']
feature.stop_gradient = True
for i in range(k_hop):
if graphsage_type == 'graphsage_mean':
feature = graphsage_mean(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_mean_%s" % i)
elif graphsage_type == 'graphsage_meanpool':
feature = graphsage_meanpool(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_meanpool_%s" % i)
elif graphsage_type == 'graphsage_maxpool':
feature = graphsage_maxpool(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_maxpool_%s" % i)
elif graphsage_type == 'graphsage_lstm':
feature = graphsage_lstm(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_maxpool_%s" % i)
else:
raise ValueError("graphsage type %s is not"
" implemented" % graphsage_type)
feature = fluid.layers.gather(feature, node_index)
logits = fluid.layers.fc(feature,
num_class,
act=None,
name='classification_layer')
proba = fluid.layers.softmax(logits)
loss = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=node_label)
loss = fluid.layers.mean(loss)
acc = fluid.layers.accuracy(input=proba, label=node_label, k=1)
return loss, acc
class GraphsageModel(object):
def __init__(self, args):
self.args = args
def forward(self):
args = self.args
graph_wrapper = pgl.graph_wrapper.GraphWrapper(
"sub_graph", node_feat=[('feats', [None, 602], np.dtype('float32'))])
loss, acc = build_graph_model(
graph_wrapper,
num_class=args.num_class,
hidden_size=args.hidden_size,
graphsage_type=args.graphsage_type,
k_hop=len(args.samples))
loss.persistable = True
self.graph_wrapper = graph_wrapper
self.loss = loss
self.acc = acc
return loss
......@@ -11,6 +11,8 @@
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import sys
import numpy as np
import pickle as pkl
import paddle
......@@ -147,3 +149,48 @@ def multiprocess_graph_reader(
return reader()
def load_data():
"""
data from https://github.com/matenure/FastGCN/issues/8
reddit.npz: https://drive.google.com/open?id=19SphVl_Oe8SJ1r87Hr5a6znx3nJu1F2J
reddit_index_label.npz is preprocessed from reddit.npz with the feats key removed.
"""
data_dir = os.path.dirname(os.path.abspath(__file__))
data = np.load(os.path.join(data_dir, "data/reddit_index_label.npz"))
num_class = 41
train_label = data['y_train']
val_label = data['y_val']
test_label = data['y_test']
train_index = data['train_index']
val_index = data['val_index']
test_index = data['test_index']
return {
"train_index": train_index,
"train_label": train_label,
"val_label": val_label,
"val_index": val_index,
"test_index": test_index,
"test_label": test_label,
"num_class": 41
}
def get_iter(args, graph_wrapper, mode):
data = load_data()
train_iter = multiprocess_graph_reader(
graph_wrapper,
samples=args.samples,
num_workers=args.num_sample_workers,
batch_size=args.batch_size,
node_index=data['train_index'],
node_label=data["train_label"])
return train_iter
if __name__ == '__main__':
# smoke test: build the iterator first, e.g. train_iter = get_iter(args, graph_wrapper, 'train')
for e in train_iter():
print(e)
#!/bin/bash
set -x
srcdir=./src
# Data preprocessing
python ./src/preprocess.py
# Download and compile redis
export PATH=$PWD/redis-5.0.5/src:$PATH
if [ ! -f ./redis.tar.gz ]; then
curl https://codeload.github.com/antirez/redis/tar.gz/5.0.5 -o ./redis.tar.gz
fi
tar -xzf ./redis.tar.gz
cd ./redis-5.0.5/
make
cd -
# Install python deps
python -m pip install -U pip
pip install -r ./src/requirements.txt -U
# Run redis server
sh ./src/run_server.sh
# Dumping data into redis
source ./redis_graph.cfg
sh ./src/dump_data.sh $edge_path $server_list $num_nodes $node_feat_path
exit 0
# dump config
edge_path=../data/edge.txt
node_feat_path=../data/feats.npz
num_nodes=232965
server_list=./server.list
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import json
import logging
from collections import defaultdict
import tqdm
import redis
from redis._compat import b, unicode, bytes, long, basestring
from rediscluster.nodemanager import NodeManager
from rediscluster.crc import crc16
import argparse
import time
import pickle
import numpy as np
import scipy.sparse as sp
log = logging.getLogger(__name__)
root = logging.getLogger()
root.setLevel(logging.DEBUG)
handler = logging.StreamHandler(sys.stdout)
handler.setLevel(logging.DEBUG)
formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
handler.setFormatter(formatter)
root.addHandler(handler)
def encode(value):
"""
Return a bytestring representation of the value.
This method is copied from Redis' connection.py:Connection.encode
"""
if isinstance(value, bytes):
return value
elif isinstance(value, (int, long)):
value = b(str(value))
elif isinstance(value, float):
value = b(repr(value))
elif not isinstance(value, basestring):
value = unicode(value)
if isinstance(value, unicode):
value = value.encode('utf-8')
return value
def crc16_hash(data):
return crc16(encode(data))
def get_redis(startup_host, startup_port):
startup_nodes = [{"host": startup_host, "port": startup_port}, ]
nodemanager = NodeManager(startup_nodes=startup_nodes)
nodemanager.initialize()
rs = {}
for node, config in nodemanager.nodes.items():
rs[node] = redis.Redis(
host=config["host"], port=config["port"], decode_responses=False)
return rs, nodemanager
def load_data(edge_path):
src, dst = [], []
with open(edge_path, "r") as f:
for i in tqdm.tqdm(f):
s, d, _ = i.split()
s = int(s)
d = int(d)
src.append(s)
dst.append(d)
dst.append(s)
src.append(d)
src = np.array(src, dtype="int64")
dst = np.array(dst, dtype="int64")
return src, dst
def build_edge_index(edge_path, num_nodes, startup_host, startup_port,
num_bucket):
#src, dst = load_data(edge_path)
rs, nodemanager = get_redis(startup_host, startup_port)
dst_mp, edge_mp = defaultdict(list), defaultdict(list)
with open(edge_path) as f:
for l in tqdm.tqdm(f):
a, b, idx = l.rstrip().split('\t')
a, b, idx = int(a), int(b), int(idx)
dst_mp[a].append(b)
edge_mp[a].append(idx)
part_dst_dicts = {}
for i in tqdm.tqdm(range(num_nodes)):
#if len(edge_index.v[i]) == 0:
# continue
#v = edge_index.v[i].astype("int64").reshape([-1, 1])
#e = edge_index.eid[i].astype("int64").reshape([-1, 1])
if i not in dst_mp:
continue
v = np.array(dst_mp[i]).astype('int64').reshape([-1, 1])
e = np.array(edge_mp[i]).astype('int64').reshape([-1, 1])
o = np.hstack([v, e])
key = "d:%s" % i
part = crc16_hash(key) % num_bucket
if part not in part_dst_dicts:
part_dst_dicts[part] = {}
dst_dicts = part_dst_dicts[part]
dst_dicts["d:%s" % i] = o.tobytes()
if len(dst_dicts) > 10000:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, dst_dicts)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_dst_dicts[part] = {}
for part, dst_dicts in part_dst_dicts.items():
if len(dst_dicts) > 0:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, dst_dicts)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_dst_dicts[part] = {}
log.info("dst_dict Done")
def build_edge_id(edge_path, num_nodes, startup_host, startup_port,
num_bucket):
src, dst = load_data(edge_path)
rs, nodemanager = get_redis(startup_host, startup_port)
part_edge_dict = {}
for i in tqdm.tqdm(range(len(src))):
key = "e:%s" % i
part = crc16_hash(key) % num_bucket
if part not in part_edge_dict:
part_edge_dict[part] = {}
edge_dict = part_edge_dict[part]
edge_dict["e:%s" % i] = int(src[i]) * num_nodes + int(dst[i])
if len(edge_dict) > 10000:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, edge_dict)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_edge_dict[part] = {}
for part, edge_dict in part_edge_dict.items():
if len(edge_dict) > 0:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, edge_dict)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_edge_dict[part] = {}
def build_infos(edge_path, num_nodes, startup_host, startup_port, num_bucket):
src, dst = load_data(edge_path)
rs, nodemanager = get_redis(startup_host, startup_port)
slot = nodemanager.keyslot("num_nodes")
node = nodemanager.slots[slot][0]['name']
res = rs[node].set("num_nodes", num_nodes)
slot = nodemanager.keyslot("num_edges")
node = nodemanager.slots[slot][0]['name']
rs[node].set("num_edges", len(src))
slot = nodemanager.keyslot("nf:infos")
node = nodemanager.slots[slot][0]['name']
rs[node].set("nf:infos", json.dumps([['feats', [-1, 602], 'float32'], ]))
slot = nodemanager.keyslot("ef:infos")
node = nodemanager.slots[slot][0]['name']
rs[node].set("ef:infos", json.dumps([]))
def build_node_feat(node_feat_path, num_nodes, startup_host, startup_port, num_bucket):
assert node_feat_path != "", "node_feat_path empty!"
feat_dict = np.load(node_feat_path)
for k in feat_dict.keys():
feat = feat_dict[k]
assert feat.shape[0] == num_nodes, "num_nodes invalid"
rs, nodemanager = get_redis(startup_host, startup_port)
part_feat_dict = {}
for k in feat_dict.keys():
feat = feat_dict[k]
for i in tqdm.tqdm(range(num_nodes)):
key = "nf:%s:%i" % (k, i)
value = feat[i].tobytes()
part = crc16_hash(key) % num_bucket
if part not in part_feat_dict:
part_feat_dict[part] = {}
part_feat = part_feat_dict[part]
part_feat[key] = value
if len(part_feat) > 100:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, part_feat)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_feat_dict[part] = {}
for part, part_feat in part_feat_dict.items():
if len(part_feat) > 0:
slot = nodemanager.keyslot("part-%s" % part)
node = nodemanager.slots[slot][0]['name']
while True:
res = rs[node].hmset("part-%s" % part, part_feat)
if res:
break
log.info("HMSET FAILED RETRY connected %s" % node)
time.sleep(1)
part_feat_dict[part] = {}
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='gen_redis_conf')
parser.add_argument('--startup_port', type=int, required=True)
parser.add_argument('--startup_host', type=str, required=True)
parser.add_argument('--edge_path', type=str, default="")
parser.add_argument('--node_feat_path', type=str, default="")
parser.add_argument('--num_nodes', type=int, default=0)
parser.add_argument('--num_bucket', type=int, default=64)
parser.add_argument(
'--mode',
type=str,
required=True,
help="choose one of the following modes (clear, edge_index, edge_id, graph_attr)"
)
args = parser.parse_args()
log.info("Mode: {}".format(args.mode))
if args.mode == 'edge_index':
build_edge_index(args.edge_path, args.num_nodes, args.startup_host,
args.startup_port, args.num_bucket)
elif args.mode == 'edge_id':
build_edge_id(args.edge_path, args.num_nodes, args.startup_host,
args.startup_port, args.num_bucket)
elif args.mode == 'graph_attr':
build_infos(args.edge_path, args.num_nodes, args.startup_host,
args.startup_port, args.num_bucket)
elif args.mode == 'node_feat':
build_node_feat(args.node_feat_path, args.num_nodes, args.startup_host,
args.startup_port, args.num_bucket)
else:
raise ValueError("%s mode not found" % args.mode)
filter(){
lines=`cat $1`
rm $1
for line in $lines; do
remote_host=`echo $line | cut -d":" -f1`
remote_port=`echo $line | cut -d":" -f2`
nc -z $remote_host $remote_port
if [[ $? == 0 ]]; then
echo $line >> $1
fi
done
}
dump_data(){
filter $server_list
python ./src/start_cluster.py --server_list $server_list --replicas 0
address=`head -n 1 $server_list`
ip=`echo $address | cut -d":" -f1`
port=`echo $address | cut -d":" -f2`
python ./src/build_graph.py --startup_host $ip \
--startup_port $port \
--mode node_feat \
--node_feat_path $feat_fn \
--num_nodes $num_nodes
# build edge index
python ./src/build_graph.py --startup_host $ip \
--startup_port $port \
--mode edge_index \
--edge_path $edge_path \
--num_nodes $num_nodes
# build edge id
#python ./src/build_graph.py --startup_host $ip \
# --startup_port $port \
# --mode edge_id \
# --edge_path $edge_path \
# --num_nodes $num_nodes
# build graph attr
python ./src/build_graph.py --startup_host $ip \
--startup_port $port \
--mode graph_attr \
--edge_path $edge_path \
--num_nodes $num_nodes
}
if [ $# -ne 4 ]; then
echo 'usage: sh dump_data.sh edge_path server_list num_nodes feat_fn'
exit
fi
num_nodes=$3
server_list=$2
edge_path=$1
feat_fn=$4
dump_data
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import sys
import socket
import argparse
import os
temp = """port %s
bind %s
daemonize yes
pidfile /var/run/redis_%s.pid
cluster-enabled yes
cluster-config-file nodes.conf
cluster-node-timeout 50000
logfile "redis.log"
appendonly yes"""
def gen_config(ports):
if len(ports) == 0:
raise ValueError("No ports")
ip = socket.gethostbyname(socket.gethostname())
print("Generate redis conf")
for port in ports:
try:
os.mkdir("%s" % port)
except OSError:
print("port %s directory already exists" % port)
with open("%s/redis.conf" % port, 'w') as f:
f.write(temp % (port, ip, port))
print("Generate Start Server Scripts")
with open("start_server.sh", "w") as f:
f.write("set -x\n")
for ind, port in enumerate(ports):
f.write("# %s %s start\n" % (ip, port))
if ind > 0:
f.write("cd ..\n")
f.write("cd %s\n" % port)
f.write("redis-server redis.conf\n")
f.write("\n")
print("Generate Stop Server Scripts")
with open("stop_server.sh", "w") as f:
f.write("set -x\n")
for ind, port in enumerate(ports):
f.write("# %s %s shutdown\n" % (ip, port))
f.write("redis-cli -h %s -p %s shutdown\n" % (ip, port))
f.write("\n")
with open("server.list", "w") as f:
for ind, port in enumerate(ports):
f.write("%s:%s\n" % (ip, port))
if __name__ == '__main__':
parser = argparse.ArgumentParser(description='gen_redis_conf')
parser.add_argument('--ports', nargs='+', type=int, default=[])
args = parser.parse_args()
gen_config(args.ports)
import os
import sys
import numpy as np
import scipy.sparse as sp
def _load_config(fn):
ret = {}
with open(fn) as f:
for l in f:
if l.strip() == '' or l.startswith('#'):
continue
k, v = l.strip().split('=')
ret[k] = v
return ret
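# e.g. the line "edge_path=../data/edge.txt" in redis_graph.cfg yields
# ret["edge_path"] = "../data/edge.txt"; blank lines and '#' comments are skipped.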
def _prepro(config):
data = np.load("../data/reddit.npz")
adj = sp.load_npz("../data/reddit_adj.npz")
adj = adj.tocoo()
src = adj.row
dst = adj.col
with open(config['edge_path'], 'w') as f:
for idx, e in enumerate(zip(src, dst)):
s, d = e
l = "{}\t{}\t{}\n".format(s, d, idx)
f.write(l)
feats = data['feats'].astype(np.float32)
np.savez(config['node_feat_path'], feats=feats)
if __name__ == '__main__':
config = _load_config('./redis_graph.cfg')
_prepro(config)
numpy
scipy
tqdm
redis==2.10.6
redis-py-cluster==1.3.6
start_server(){
ports=""
for i in {7430..7439}; do
nc -z localhost $i
if [[ $? != 0 ]]; then
ports="$ports $i"
fi
done
python ./src/gen_redis_conf.py --ports $ports
bash ./start_server.sh  # start the redis servers
}
start_server
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import argparse
def build_clusters(server_list, replicas):
servers = []
with open(server_list) as f:
for line in f:
servers.append(line.strip())
cmd = "echo yes | redis-cli --cluster create"
for server in servers:
cmd += ' %s ' % server
cmd += '--cluster-replicas %s' % replicas
print(cmd)
os.system(cmd)
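# For a hypothetical server.list containing 127.0.0.1:7430 and 127.0.0.1:7431
# with --replicas 0, the assembled command would be:
#   echo yes | redis-cli --cluster create 127.0.0.1:7430 127.0.0.1:7431 --cluster-replicas 0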
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='start_cluster')
parser.add_argument('--server_list', type=str, required=True)
parser.add_argument('--replicas', type=int, default=0)
args = parser.parse_args()
build_clusters(args.server_list, args.replicas)
#!/bin/bash
source ./redis_graph.cfg
url=`head -n1 $server_list`
shuf $edge_path | head -n 1000 | python ./test/test_redis_graph.py $url
#!/usr/bin/env python
# -*- coding: utf-8 -*-
########################################################################
#
# Copyright (c) 2019 Baidu.com, Inc. All Rights Reserved
#
# File: test_redis_graph.py
# Author: suweiyue(suweiyue@baidu.com)
# Date: 2019/08/19 16:28:18
#
########################################################################
"""
Sanity check for the redis graph service: for each edge (i, j) read from stdin,
verify that j appears among the sampled predecessors of i.
"""
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import sys
import numpy as np
import tqdm
from pgl.redis_graph import RedisGraph
if __name__ == '__main__':
host, port = sys.argv[1].split(':')
port = int(port)
redis_configs = [{"host": host, "port": port}, ]
graph = RedisGraph("reddit-graph", redis_configs, num_parts=64)
#nodes = np.arange(0, 100)
#for i in range(0, 100):
for l in tqdm.tqdm(sys.stdin):
l_sp = l.rstrip().split('\t')
if len(l_sp) != 2:
continue
i, j = int(l_sp[0]), int(l_sp[1])
nodes = graph.sample_predecessor(np.array([i]), 10000)
assert j in nodes
pgl==1.1.0
pyyaml
paddlepaddle==1.6.1
scipy
redis==2.10.6
redis-py-cluster==1.3.6
......
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import os
import argparse
import time
import numpy as np
import scipy.sparse as sp
from sklearn.preprocessing import StandardScaler
import pgl
from pgl.utils.logger import log
from pgl.utils import paddle_helper
import paddle
import paddle.fluid as fluid
import reader
from model import graphsage_mean, graphsage_meanpool,\
graphsage_maxpool, graphsage_lstm
def load_data():
"""
data from https://github.com/matenure/FastGCN/issues/8
reddit.npz: https://drive.google.com/open?id=19SphVl_Oe8SJ1r87Hr5a6znx3nJu1F2J
reddit_index_label is preprocess from reddit.npz without feats key.
"""
data_dir = os.path.dirname(os.path.abspath(__file__))
data = np.load(os.path.join(data_dir, "data/reddit_index_label.npz"))
num_class = 41
train_label = data['y_train']
val_label = data['y_val']
test_label = data['y_test']
train_index = data['train_index']
val_index = data['val_index']
test_index = data['test_index']
return {
"train_index": train_index,
"train_label": train_label,
"val_label": val_label,
"val_index": val_index,
"test_index": test_index,
"test_label": test_label,
"num_class": 41
}
def build_graph_model(graph_wrapper, num_class, k_hop, graphsage_type,
hidden_size):
node_index = fluid.layers.data(
"node_index", shape=[None], dtype="int64", append_batch_size=False)
node_label = fluid.layers.data(
"node_label", shape=[None, 1], dtype="int64", append_batch_size=False)
#feature = fluid.layers.gather(feature, graph_wrapper.node_feat['feats'])
feature = graph_wrapper.node_feat['feats']
feature.stop_gradient = True
for i in range(k_hop):
if graphsage_type == 'graphsage_mean':
feature = graphsage_mean(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_mean_%s" % i)
elif graphsage_type == 'graphsage_meanpool':
feature = graphsage_meanpool(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_meanpool_%s" % i)
elif graphsage_type == 'graphsage_maxpool':
feature = graphsage_maxpool(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_maxpool_%s" % i)
elif graphsage_type == 'graphsage_lstm':
feature = graphsage_lstm(
graph_wrapper,
feature,
hidden_size,
act="relu",
name="graphsage_maxpool_%s" % i)
else:
raise ValueError("graphsage type %s is not"
" implemented" % graphsage_type)
feature = fluid.layers.gather(feature, node_index)
logits = fluid.layers.fc(feature,
num_class,
act=None,
name='classification_layer')
proba = fluid.layers.softmax(logits)
loss = fluid.layers.softmax_with_cross_entropy(
logits=logits, label=node_label)
loss = fluid.layers.mean(loss)
acc = fluid.layers.accuracy(input=proba, label=node_label, k=1)
return loss, acc
def run_epoch(batch_iter,
exe,
program,
prefix,
model_loss,
model_acc,
epoch,
log_per_step=100):
batch = 0
total_loss = 0.
total_acc = 0.
total_sample = 0
start = time.time()
for batch_feed_dict in batch_iter():
batch += 1
batch_loss, batch_acc = exe.run(program,
fetch_list=[model_loss, model_acc],
feed=batch_feed_dict)
if batch % log_per_step == 0:
log.info("Batch %s %s-Loss %s %s-Acc %s" %
(batch, prefix, batch_loss, prefix, batch_acc))
num_samples = len(batch_feed_dict["node_index"])
total_loss += batch_loss * num_samples
total_acc += batch_acc * num_samples
total_sample += num_samples
end = time.time()
log.info("%s Epoch %s Loss %.5lf Acc %.5lf Speed(per batch) %.5lf sec" %
(prefix, epoch, total_loss / total_sample,
total_acc / total_sample, (end - start) / batch))
def main(args):
data = load_data()
log.info("preprocess finish")
log.info("Train Examples: %s" % len(data["train_index"]))
log.info("Val Examples: %s" % len(data["val_index"]))
log.info("Test Examples: %s" % len(data["test_index"]))
place = fluid.CUDAPlace(0) if args.use_cuda else fluid.CPUPlace()
train_program = fluid.Program()
startup_program = fluid.Program()
samples = []
if args.samples_1 > 0:
samples.append(args.samples_1)
if args.samples_2 > 0:
samples.append(args.samples_2)
with fluid.program_guard(train_program, startup_program):
graph_wrapper = pgl.graph_wrapper.GraphWrapper(
"sub_graph", node_feat=[('feats', [None, 602], np.dtype('float32'))])
model_loss, model_acc = build_graph_model(
graph_wrapper,
num_class=data["num_class"],
hidden_size=args.hidden_size,
graphsage_type=args.graphsage_type,
k_hop=len(samples))
test_program = train_program.clone(for_test=True)
with fluid.program_guard(train_program, startup_program):
adam = fluid.optimizer.Adam(learning_rate=args.lr)
adam.minimize(model_loss)
exe = fluid.Executor(place)
exe.run(startup_program)
train_iter = reader.multiprocess_graph_reader(
graph_wrapper,
samples=samples,
num_workers=args.sample_workers,
batch_size=args.batch_size,
node_index=data['train_index'],
node_label=data["train_label"])
val_iter = reader.multiprocess_graph_reader(
graph_wrapper,
samples=samples,
num_workers=args.sample_workers,
batch_size=args.batch_size,
node_index=data['val_index'],
node_label=data["val_label"])
test_iter = reader.multiprocess_graph_reader(
graph_wrapper,
samples=samples,
num_workers=args.sample_workers,
batch_size=args.batch_size,
node_index=data['test_index'],
node_label=data["test_label"])
for epoch in range(args.epoch):
run_epoch(
train_iter,
program=train_program,
exe=exe,
prefix="train",
model_loss=model_loss,
model_acc=model_acc,
log_per_step=1,
epoch=epoch)
run_epoch(
val_iter,
program=test_program,
exe=exe,
prefix="val",
model_loss=model_loss,
model_acc=model_acc,
log_per_step=10000,
epoch=epoch)
run_epoch(
test_iter,
program=test_program,
prefix="test",
exe=exe,
model_loss=model_loss,
model_acc=model_acc,
log_per_step=10000,
epoch=epoch)
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='graphsage')
parser.add_argument("--use_cuda", action='store_true', help="use_cuda")
parser.add_argument(
"--normalize", action='store_true', help="normalize features")
parser.add_argument(
"--symmetry", action='store_true', help="undirect graph")
parser.add_argument("--graphsage_type", type=str, default="graphsage_mean")
parser.add_argument("--sample_workers", type=int, default=10)
parser.add_argument("--epoch", type=int, default=10)
parser.add_argument("--hidden_size", type=int, default=128)
parser.add_argument("--batch_size", type=int, default=128)
parser.add_argument("--lr", type=float, default=0.01)
parser.add_argument("--samples_1", type=int, default=25)
parser.add_argument("--samples_2", type=int, default=10)
args = parser.parse_args()
log.info(args)
main(args)
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Implementation of some helper functions"""
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import os
import time
import yaml
import numpy as np
from pgl.utils.logger import log
class AttrDict(dict):
"""Attr dict
"""
def __init__(self, d):
self.dict = d
def __getattr__(self, attr):
value = self.dict[attr]
if isinstance(value, dict):
return AttrDict(value)
else:
return value
def __str__(self):
return str(self.dict)
def load_config(config_file):
"""Load config file"""
with open(config_file) as f:
if hasattr(yaml, 'FullLoader'):
config = yaml.load(f, Loader=yaml.FullLoader)
else:
config = yaml.load(f)
return AttrDict(config)
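# Usage sketch (assuming the config.yaml shown earlier in this example):
#   config = load_config("config.yaml")
#   config.hidden_size   # -> 128
#   config.samples       # -> [25, 10]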
# parse yaml file
function parse_yaml {
local prefix=$2
local s='[[:space:]]*' w='[a-zA-Z0-9_]*' fs=$(echo @|tr @ '\034')
sed -ne "s|^\($s\):|\1|" \
-e "s|^\($s\)\($w\)$s:$s[\"']\(.*\)[\"']$s\$|\1$fs\2$fs\3|p" \
-e "s|^\($s\)\($w\)$s:$s\(.*\)$s\$|\1$fs\2$fs\3|p" $1 |
awk -F$fs '{
indent = length($1)/2;
vname[indent] = $2;
for (i in vname) {if (i > indent) {delete vname[i]}}
if (length($3) > 0) {
vn=""; for (i=0; i<indent; i++) {vn=(vn)(vname[i])("_")}
printf("%s%s%s=\"%s\"\n", "'$prefix'",vn, $2, $3);
}
}'
}
eval $(parse_yaml "$(dirname "${BASH_SOURCE}")"/config.yaml)
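# For example, the config.yaml line `log_dir: "./logs"` becomes the shell
# variable log_dir="./logs" after the eval above.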
......@@ -31,7 +31,7 @@ final_fc: true
final_l2_norm: true
loss_type: "hinge"
margin: 0.3
neg_type: "random_neg"
neg_type: "batch_neg"
# infer config ------
infer_model: "./output/last"
......@@ -49,7 +49,7 @@ ernie_config:
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
sent_type_vocab_size: 2
task_type_vocab_size: 3
vocab_size: 18000
use_task_id: false
......
......@@ -31,7 +31,7 @@ final_fc: true
final_l2_norm: true
loss_type: "hinge"
margin: 0.3
neg_type: "random_neg"
neg_type: "batch_neg"
# infer config ------
infer_model: "./output/last"
......@@ -49,7 +49,7 @@ ernie_config:
max_position_embeddings: 513
num_attention_heads: 12
num_hidden_layers: 12
sent_type_vocab_size: 2
task_type_vocab_size: 3
vocab_size: 18000
use_task_id: false
......
......@@ -24,7 +24,7 @@ from pgl.sample import edge_hash
class GraphGenerator(BaseDataGenerator):
def __init__(self, graph_wrappers, data, batch_size, samples,
num_workers, feed_name_list, use_pyreader,
phase, graph_data_path, shuffle=True, buf_size=1000):
phase, graph_data_path, shuffle=True, buf_size=1000, neg_type="batch_neg"):
super(GraphGenerator, self).__init__(
buf_size=buf_size,
......@@ -40,6 +40,7 @@ class GraphGenerator(BaseDataGenerator):
self.phase = phase
self.load_graph(graph_data_path)
self.num_layers = len(graph_wrappers)
self.neg_type = neg_type
def load_graph(self, graph_data_path):
self.graph = pgl.graph.MemmapGraph(graph_data_path)
......@@ -72,7 +73,11 @@ class GraphGenerator(BaseDataGenerator):
batch_src = np.array(batch_src, dtype="int64")
batch_dst = np.array(batch_dst, dtype="int64")
if self.neg_type == "batch_neg":
neg_shape = [1]
else:
neg_shape = batch_dst.shape
sampled_batch_neg = alias_sample(neg_shape, self.alias, self.events)
if len(batch_neg) > 0:
batch_neg = np.concatenate([batch_neg, sampled_batch_neg], 0)
......@@ -80,6 +85,7 @@ class GraphGenerator(BaseDataGenerator):
batch_neg = sampled_batch_neg
if self.phase == "train":
#ignore_edges = np.concatenate([np.stack([batch_src, batch_dst], 1), np.stack([batch_dst, batch_src], 1)], 0)
ignore_edges = set()
else:
ignore_edges = set()
......@@ -99,7 +105,7 @@ class GraphGenerator(BaseDataGenerator):
feed_dict["user_index"] = np.array(sub_src_idx, dtype="int64")
feed_dict["item_index"] = np.array(sub_dst_idx, dtype="int64")
feed_dict["neg_item_index"] = np.array(sub_neg_idx, dtype="int64")
feed_dict["term_ids"] = self.term_ids[subgraphs[0].node_feat["index"]]
feed_dict["term_ids"] = self.term_ids[subgraphs[0].node_feat["index"]].astype(np.int64)
return feed_dict
def __call__(self):
......
......@@ -59,8 +59,7 @@ def run_predict(py_reader,
log_per_step=1,
args=None):
if args.input_type == "text":
id2str = io.open(os.path.join(args.graph_path, "terms.txt"), encoding=args.encoding).readlines()
trainer_id = int(os.getenv("PADDLE_TRAINER_ID", "0"))
trainer_count = int(os.getenv("PADDLE_TRAINERS_NUM", "1"))
......@@ -82,7 +81,7 @@ def run_predict(py_reader,
for ufs, _, sri in zip(batch_usr_feat, batch_ad_feat, batch_src_real_index):
if args.input_type == "text":
sri = id2str[int(sri)].strip("\n")
line = "{}\t{}\n".format(sri, tostr(ufs))
fout.write(line)
......
......@@ -17,6 +17,7 @@ role = os.getenv("TRAINING_ROLE", "TRAINER")
import numpy as np
from pgl.utils.logger import log
from pgl.utils.log_writer import LogWriter
import paddle.fluid as F
import paddle.fluid.layers as L
from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import StrategyFactory
......@@ -25,7 +26,6 @@ from paddle.fluid.transpiler.distribute_transpiler import DistributeTranspilerCo
from paddle.fluid.incubate.fleet.collective import fleet as cfleet
from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler import fleet as tfleet
import paddle.fluid.incubate.fleet.base.role_maker as role_maker
from paddle.fluid.transpiler.distribute_transpiler import DistributedMode
from paddle.fluid.incubate.fleet.parameter_server.distribute_transpiler.distributed_strategy import TrainerRuntimeConfig
......@@ -77,7 +77,7 @@ class Learner(object):
start = time.time()
trainer_id = int(os.getenv("PADDLE_TRAINER_ID", "0"))
if trainer_id == 0:
writer = LogWriter(os.path.join(self.config.output_path, "train_history"))
for epoch_idx in range(self.config.epoch):
for idx, batch_feed_dict in enumerate(self.model.data_loader()):
......
......@@ -191,12 +191,12 @@ def all_gather(X):
for i in range(trainer_num):
copy_X = X * 1
copy_X = L.collective._broadcast(copy_X, i, True)
copy_X.stop_gradient=True
Xs.append(copy_X)
if len(Xs) > 1:
Xs=L.concat(Xs, 0)
Xs.stop_gradient=True
else:
Xs = Xs[0]
return Xs
......
......@@ -104,7 +104,7 @@ class ErnieModel(object):
zero = L.fill_constant([1], dtype='int64', value=0)
input_mask = L.logical_not(L.equal(src_ids,
zero)) # assume pad id == 0
input_mask = L.cast(input_mask, 'float32')
input_mask.stop_gradient = True
return input_mask
......@@ -342,7 +342,7 @@ class ErnieGraphModel(ErnieModel):
L.range(
0, slot_seqlen, 1, dtype='int32'), [1, slot_seqlen, 1],
inplace=True) # [1, slot_seqlen, 1]
a_position_ids = L.expand(a_position_ids, [src_batch, 1, 1]) # [B, slot_seqlen, 1]
zero = L.fill_constant([1], dtype='int64', value=0)
input_mask = L.cast(L.equal(src_ids[:, :slot_seqlen], zero), "int32") # assume pad id == 0 [B, slot_seqlen, 1]
......
......@@ -455,18 +455,6 @@ def graph_encoder(enc_input,
attn_bias = build_graph_attn_bias(input_mask, n_head, enc_input.dtype, slot_seqlen)
#attn_bias = build_attn_bias(input_mask, n_head, enc_input.dtype)
def to_2d(t_3d):
t_2d = L.gather_nd(t_3d, pad_idx)
return t_2d
......
......@@ -27,7 +27,7 @@ class ErnieSageV2(BaseNet):
src_position_ids = L.expand(src_position_ids, [src_batch, 1, 1]) # [B, slot_seqlen * num_b, 1]
zero = L.fill_constant([1], dtype='int64', value=0)
input_mask = L.cast(L.equal(src_ids, zero), "int32") # assume pad id == 0 [B, slot_seqlen, 1]
src_pad_len = L.reduce_sum(input_mask, 1, keep_dim=True) # [B, 1, 1]
dst_position_ids = L.reshape(
L.range(
......@@ -81,14 +81,16 @@ class ErnieSageV2(BaseNet):
self_feature = L.fc(self_feature,
hidden_size,
act=act,
param_attr=F.ParamAttr(name=name + "_l",
param_attr=F.ParamAttr(name=name + "_l.w_0",
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = L.fc(neigh_feature,
hidden_size,
act=act,
param_attr=F.ParamAttr(name=name + "_r",
learning_rate=learning_rate),
param_attr=F.ParamAttr(name=name + "_r.w_0",
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = L.concat([self_feature, neigh_feature], axis=1)
output = L.l2_normalize(output, axis=1)
......
......@@ -24,7 +24,6 @@ from models.message_passing import copy_send
class ErnieSageV3(BaseNet):
def __init__(self, config):
super(ErnieSageV3, self).__init__(config)
self.config.layer_type = "ernie_recv_sum"
def build_inputs(self):
inputs = super(ErnieSageV3, self).build_inputs()
......@@ -35,11 +34,10 @@ class ErnieSageV3(BaseNet):
def gnn_layer(self, gw, feature, hidden_size, act, initializer, learning_rate, name):
def ernie_recv(feat):
"""doc"""
# TODO maxlen 400
#pad_value = L.cast(L.assign(input=np.array([0], dtype=np.int32)), "int64")
num_neighbor = self.config.samples[0]
pad_value = L.zeros([1], "int64")
out, _ = L.sequence_pad(feat, pad_value=pad_value, maxlen=10)
out = L.reshape(out, [0, 400])
out, _ = L.sequence_pad(feat, pad_value=pad_value, maxlen=num_neighbor)
out = L.reshape(out, [0, self.config.max_seqlen*num_neighbor])
return out
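# ernie_recv pads each node's variable number of neighbor messages to exactly
# num_neighbor sequences of max_seqlen token ids and flattens them, so the
# downstream ERNIE encoder always sees a fixed-width
# [batch_size, max_seqlen * num_neighbor] tensor.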
def erniesage_v3_aggregator(gw, feature, hidden_size, act, initializer, learning_rate, name):
......@@ -73,7 +71,7 @@ class ErnieSageV3(BaseNet):
act,
initializer,
learning_rate=fc_lr,
name="%s_%s" % (self.config.layer_type, i))
name="%s_%s" % ("erniesage_v3", i))
features.append(feature)
return features
......@@ -85,17 +83,16 @@ class ErnieSageV3(BaseNet):
ernie = ErnieGraphModel(
src_ids=feat,
config=ernie_config,
slot_seqlen=self.config.max_seqlen,
name="student_")
slot_seqlen=self.config.max_seqlen)
feat = ernie.get_pooled_output()
fc_lr = self.config.lr / 0.001
feat= L.fc(feat,
self.config.hidden_size,
act="relu",
param_attr=F.ParamAttr(name=name + "_l",
learning_rate=fc_lr),
)
feat = L.l2_normalize(feat, axis=1)
# feat = L.fc(feat,
# self.config.hidden_size,
# act="relu",
# param_attr=F.ParamAttr(name=name + "_l",
# learning_rate=fc_lr),
# )
#feat = L.l2_normalize(feat, axis=1)
if self.config.final_fc:
feat = L.fc(feat,
......
......@@ -57,14 +57,16 @@ def graphsage_sum(gw, feature, hidden_size, act, initializer, learning_rate, nam
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
......@@ -79,14 +81,16 @@ def graphsage_mean(gw, feature, hidden_size, act, initializer, learning_rate, na
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
......@@ -101,14 +105,16 @@ def pinsage_mean(gw, feature, hidden_size, act, initializer, learning_rate, name
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
......@@ -123,14 +129,16 @@ def pinsage_sum(gw, feature, hidden_size, act, initializer, learning_rate, name)
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r", initializer=initializer,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
......
......@@ -36,7 +36,7 @@ from tokenization import FullTokenizer
def term2id(string, tokenizer, max_seqlen):
string = string.split("\t")[1]
#string = string.split("\t")[1]
tokens = tokenizer.tokenize(string)
ids = tokenizer.convert_tokens_to_ids(tokens)
ids = ids[:max_seqlen-1]
......@@ -99,19 +99,13 @@ def dump_graph(args):
np.save(os.path.join(args.outpath, "neg_samples.npy"), np.array(neg_samples))
log.info("End Build Graph")
def dump_id2str_map(args):
log.info("Dump id2str map starting...")
id2str = np.array([line.strip("\n") for line in open(os.path.join(args.outpath, "terms.txt"), "r", encoding=args.encoding)])
np.save(os.path.join(args.outpath, "id2str.npy"), id2str)
log.info("Dump id2str map done.")
def dump_node_feat(args):
log.info("Dump node feat starting...")
id2str = np.load(os.path.join(args.outpath, "id2str.npy"), mmap_mode="r")
id2str = [line.strip("\n").split("\t")[1] for line in io.open(os.path.join(args.outpath, "terms.txt"), encoding=args.encoding)]
pool = multiprocessing.Pool()
tokenizer = FullTokenizer(args.vocab_file)
term_ids = pool.map(partial(term2id, tokenizer=tokenizer, max_seqlen=args.max_seqlen), id2str)
np.save(os.path.join(args.outpath, "term_ids.npy"), np.array(term_ids))
np.save(os.path.join(args.outpath, "term_ids.npy"), np.array(term_ids, np.uint16))
log.info("Dump node feat done.")
pool.terminate()
......@@ -124,5 +118,4 @@ if __name__ == "__main__":
parser.add_argument("-o", "--outpath", type=str, default=None)
args = parser.parse_args()
dump_graph(args)
dump_id2str_map(args)
dump_node_feat(args)
......@@ -32,8 +32,9 @@ class TrainData(object):
trainer_count = int(os.getenv("PADDLE_TRAINERS_NUM", "1"))
log.info("trainer_id: %s, trainer_count: %s." % (trainer_id, trainer_count))
edges = np.load(os.path.join(graph_path, "edges.npy"), allow_pickle=True)
bidirectional_edges = np.load(os.path.join(graph_path, "edges.npy"), allow_pickle=True)
# edges are stored bidirectionally (forward/backward interleaved); keep one direction.
edges = bidirectional_edges[0::2]
train_usr = edges[trainer_id::trainer_count, 0]
train_ad = edges[trainer_id::trainer_count, 1]
returns = {
......@@ -73,7 +74,8 @@ def main(config):
use_pyreader=config.use_pyreader,
phase="train",
graph_data_path=config.graph_path,
shuffle=True)
shuffle=True,
neg_type=config.neg_type)
log.info("build graph reader done.")
......
......@@ -23,7 +23,7 @@ You can make your customized dataset by the following format:
For example, to train STGCN on your own dataset with a GPU:
```
python main.py --use_cuda --input_file dataset/input_csv --label_file dataset/output.csv --adj_mat_file dataset/W.csv --city_file dataset/city.csv
python main.py --use_cuda --input_file dataset/input.csv --label_file dataset/output.csv --adj_mat_file dataset/W.csv --city_file dataset/city.csv
```
#### Hyperparameters
......
......@@ -167,9 +167,6 @@ def data_gen_mydata(input_file, label_file, n, n_his, n_pred, n_config):
x = x.drop(columns=['date'])
y = y.drop(columns=['date'])
x = x.drop(columns=['武汉'])
y = y.drop(columns=['武汉'])
# param
n_val, n_test = n_config
n_train = len(y) - n_val - n_test - 2
......
0,3409,2025,509,13098
2404,0,2207,3654,9485
21926,18619,0,955,1308
20160,12493,170,0,1906
611,572,1204,1066,0
num,city
0,A
1,B
2,C
3,D
4,E
date,A,B,C,D,E
2327/1/1,178,3907,2907,1170,832
2327/1/2,220,2720,2548,1370,1039
2327/1/3,222,5065,4286,2051,1582
2327/1/4,183,5291,4626,2096,1614
2327/1/5,172,3916,3538,1726,1349
2327/1/6,219,4079,4110,2044,1701
2327/1/7,220,4707,4673,2589,2177
2327/1/8,222,5306,5512,3015,2463
2327/1/9,215,5762,5802,3184,2558
2327/1/10,217,4977,4641,2659,2185
2327/1/11,186,6849,6106,3092,2310
2327/1/12,175,5953,4986,2521,1769
2327/1/13,215,5270,4983,2559,1818
2327/1/14,213,5304,5307,2516,1707
2327/1/15,205,5499,5684,2659,1633
2327/1/16,205,5811,6531,2920,1793
2327/1/17,222,6397,7745,3159,2036
2327/1/18,253,7759,9681,4011,2331
2327/1/19,859,8791,8215,4507,2480
2327/1/20,837,10348,9960,5655,3167
2327/1/21,931,12782,13621,7107,4291
2327/1/22,1048,15298,16222,8206,4730
2327/1/23,835,16287,14803,6504,3679
2327/1/24,635,4806,3970,1551,816
2327/1/25,511,1028,1023,401,205
2327/1/26,387,483,632,249,111
2327/1/27,459,457,591,209,126
2327/1/28,1073,513,707,234,176
2327/1/29,1301,651,932,276,264
2327/1/30,1502,757,1266,369,302
2327/1/31,1823,972,1286,490,487
2327/2/1,2219,1113,1594,579,548
2327/2/2,2719,1345,2172,695,703
2327/2/3,3563,1556,2517,931,823
2327/2/4,4335,1824,2837,1095,928
2327/2/5,5568,2343,3323,1244,1043
2327/2/6,6070,2917,3420,1295,1054
2327/2/7,7169,3278,3758,1516,1185
2327/2/8,8284,3616,3982,1639,1333
2327/2/9,9229,3799,4200,1726,1418
2327/2/10,10425,3876,4334,1750,1449
2327/2/11,11213,3920,4522,1818,1484
2327/2/12,11653,4106,4831,1881,1512
2327/2/13,20427,4343,5413,2537,1570
2327/2/14,24164,4666,5914,2636,1607
2327/2/15,22608,4901,5546,2812,1557
date,A,B,C,D,E
2327/1/24,70,22,0,2,0
2327/1/25,77,4,52,2,1
2327/1/26,46,29,58,23,7
2327/1/27,80,45,32,14,28
2327/1/28,892,73,59,24,34
2327/1/29,315,101,111,30,61
2327/1/30,356,125,172,50,32
2327/1/31,378,142,77,70,123
2327/2/1,576,87,153,66,61
2327/2/2,894,121,276,46,94
2327/2/3,1033,169,244,166,107
2327/2/4,1242,202,176,114,84
2327/2/5,1967,342,223,100,103
2327/2/6,1766,424,162,88,52
2327/2/7,1501,255,90,84,51
2327/2/8,1985,172,144,56,69
2327/2/9,1379,123,100,56,81
2327/2/10,1920,105,111,48,31
2327/2/11,1552,101,80,30,44
2327/2/12,1104,109,66,35,25
2327/2/13,13436,123,264,321,13
2327/2/14,2997,135,129,16,10
2327/2/15,1923,105,26,31,17
2327/2/16,1548,87,6,12,16
......@@ -124,7 +124,7 @@ def main(args):
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument('--n_route', type=int, default=74)
parser.add_argument('--n_route', type=int, default=5)
parser.add_argument('--n_his', type=int, default=23)
parser.add_argument('--n_pred', type=int, default=3)
parser.add_argument('--batch_size', type=int, default=10)
......
......@@ -19,10 +19,10 @@ import os
from datetime import datetime
import logging
from collections import defaultdict
from tensorboardX import SummaryWriter
import paddle.fluid as F
from pgl.utils.logger import log
from pgl.utils.log_writer import LogWriter
def multi_device(reader, dev_count):
......@@ -79,10 +79,10 @@ def train_and_evaluate(exe,
global_step = 0
timestamp = datetime.now().strftime("%Hh%Mm%Ss")
log_path = os.path.join(args.log_dir, "tensorboard_log_%s" % timestamp)
log_path = os.path.join(args.log_dir, "log_%s" % timestamp)
_create_if_not_exist(log_path)
writer = SummaryWriter(log_path)
writer = LogWriter(log_path)
best_valid_score = 0.0
for e in range(args.epoch):
......@@ -99,7 +99,7 @@ def train_and_evaluate(exe,
ret = model.metrics.parse(ret)
if global_step % args.train_log_step == 0:
writer.add_scalar(
"batch_loss", ret['loss'], global_step=global_step)
"batch_loss", ret['loss'], global_step)
log.info("epoch: %d | step: %d | loss: %.4f " %
(e, global_step, ret['loss']))
......@@ -111,7 +111,7 @@ def train_and_evaluate(exe,
for key, value in valid_ret.items():
message += "%s %.4f | " % (key, value)
writer.add_scalar(
"eval_%s" % key, value, global_step=global_step)
"eval_%s" % key, value, global_step)
log.info(message)
# testing
......@@ -120,7 +120,7 @@ def train_and_evaluate(exe,
for key, value in test_ret.items():
message += "%s %.4f | " % (key, value)
writer.add_scalar(
"test_%s" % key, value, global_step=global_step)
"test_%s" % key, value, global_step)
log.info(message)
# evaluate after one epoch
......@@ -128,7 +128,7 @@ def train_and_evaluate(exe,
message = "epoch %s valid: " % e
for key, value in valid_ret.items():
message += "%s %.4f | " % (key, value)
writer.add_scalar("eval_%s" % key, value, global_step=global_step)
writer.add_scalar("eval_%s" % key, value, global_step)
log.info(message)
# testing
......@@ -136,7 +136,7 @@ def train_and_evaluate(exe,
message = "epoch %s test: " % e
for key, value in test_ret.items():
message += "%s %.4f | " % (key, value)
writer.add_scalar("test_%s" % key, value, global_step=global_step)
writer.add_scalar("test_%s" % key, value, global_step)
log.info(message)
message = "epoch %s best %s result | " % (e, args.eval_metrics)
......
......@@ -18,7 +18,7 @@ import numpy as np
import sys
import os
import paddle.fluid as F
from tensorboardX import SummaryWriter
from pgl.utils.log_writer import LogWriter
from ogb.linkproppred import Evaluator
from ogb.linkproppred import LinkPropPredDataset
......@@ -115,7 +115,7 @@ def train_and_evaluate(exe,
log_path = os.path.join(output_path, "log")
_create_if_not_exist(log_path)
writer = SummaryWriter(log_path)
writer = LogWriter(log_path)
best_model = 0
for e in range(epoch):
......@@ -134,7 +134,7 @@ def train_and_evaluate(exe,
if global_step % train_log_step == 0:
for key, value in ret.items():
writer.add_scalar(
'train_' + key, value, global_step=global_step)
'train_' + key, value, global_step)
global_step += 1
if global_step % eval_step == 0:
......@@ -149,7 +149,7 @@ def train_and_evaluate(exe,
sys.stderr.write(json.dumps(eval_ret, indent=4) + "\n")
for key, value in eval_ret.items():
writer.add_scalar(key, value, global_step=global_step)
writer.add_scalar(key, value, global_step)
if eval_ret["valid_hits@100"] > best_model:
F.io.save_persistables(
......@@ -170,7 +170,7 @@ def train_and_evaluate(exe,
sys.stderr.write(json.dumps(eval_ret, indent=4) + "\n")
for key, value in eval_ret.items():
writer.add_scalar(key, value, global_step=global_step)
writer.add_scalar(key, value, global_step)
if eval_ret["valid_hits@100"] > best_model:
F.io.save_persistables(exe,
......
# Graph Node Prediction for Open Graph Benchmark (OGB) Arxiv dataset
[The Open Graph Benchmark (OGB)](https://ogb.stanford.edu/) is a collection of benchmark datasets, data loaders, and evaluators for graph machine learning. Here we tackle the graph node prediction task on the ogbn-arxiv dataset with PGL.
### Requirements
- paddlepaddle >= 1.7.1
- pgl 1.0.2
- ogb 1.1.1
### How to Run
```
CUDA_VISIBLE_DEVICES=0 python train.py \
--use_cuda 1 \
--num_workers 4 \
--output_path ./output/model_1 \
--batch_size 1024 \
--test_batch_size 512 \
--epoch 100 \
--learning_rate 0.001 \
--full_batch 0 \
--model gaan \
--drop_rate 0.5 \
--samples 8 8 8 \
--test_samples 20 20 20 \
--hidden_size 256
```
or
```
sh run.sh
```
The best record will be saved in ./output/model_1/best.txt.
### Hyperparameters
- use_cuda: whether to use the GPU
- num_workers: the number of sampling workers
- output_path: path to save the model
- batch_size: batch size
- epoch: number of training epochs
- learning_rate: learning rate
- full_batch: run the full graph as a single batch; if set, batch_size takes no effect
- model: model to run; gaan, sage, gat, gcn, gin and mlp are available
- drop_rate: dropout rate of the feature layers
- samples: the number of sampled neighbors for each GNN layer
- test_samples: the number of sampled neighbors for each GNN layer at valid/test time
- test_batch_size: batch size for the valid/test phase
- hidden_size: the hidden size of the model
### Performance
We train our models for 100 epochs and report the **accuracy** on the test dataset.
|dataset|mean|std|#experiments|
|-|-|-|-|
|ogbn-arxiv|0.7197|0.0024|16|
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""finetune args"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import time
import argparse
from utils.args import ArgumentGroup
# yapf: disable
parser = argparse.ArgumentParser(__doc__)
model_g = ArgumentGroup(parser, "model", "model configuration and paths.")
model_g.add_arg("init_checkpoint", str, None, "Init checkpoint to resume training from.")
model_g.add_arg("init_pretraining_params", str, None,
"Init pre-training params which preforms fine-tuning from. If the "
"arg 'init_checkpoint' has been set, this argument wouldn't be valid.")
train_g = ArgumentGroup(parser, "training", "training options.")
train_g.add_arg("epoch", int, 3, "Number of epoches for fine-tuning.")
train_g.add_arg("learning_rate", float, 5e-5, "Learning rate used to train with warmup.")
run_type_g = ArgumentGroup(parser, "run_type", "running type options.")
run_type_g.add_arg("use_cuda", bool, True, "If set, use GPU for training.")
run_type_g.add_arg("num_workers", int, 4, "use multiprocess to generate graph")
run_type_g.add_arg("output_path", str, None, "path to save model")
run_type_g.add_arg("model", str, None, "model to run")
run_type_g.add_arg("hidden_size", int, 256, "model hidden-size")
run_type_g.add_arg("drop_rate", float, 0.5, "Dropout rate")
run_type_g.add_arg("batch_size", int, 1024, "batch_size")
run_type_g.add_arg("full_batch", bool, False, "use static graph wrapper, if full_batch is true, batch_size will take no effect.")
run_type_g.add_arg("samples", type=int, nargs='+', default=[30, 30], help="sample nums of k-hop.")
run_type_g.add_arg("test_batch_size", int, 512, help="sample nums of k-hop of test phase.")
run_type_g.add_arg("test_samples", type=int, nargs='+', default=[30, 30], help="sample nums of k-hop.")
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Base DataLoader
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import sys
import six
from io import open
from collections import namedtuple
import numpy as np
import tqdm
import paddle
from pgl.utils import mp_reader
import collections
import time
import pgl
if six.PY3:
import io
sys.stdout = io.TextIOWrapper(sys.stdout.buffer, encoding='utf-8')
sys.stderr = io.TextIOWrapper(sys.stderr.buffer, encoding='utf-8')
def batch_iter(data, perm, batch_size, fid, num_workers):
"""node_batch_iter
"""
size = len(data)
start = 0
cc = 0
while start < size:
index = perm[start:start + batch_size]
start += batch_size
cc += 1
if cc % num_workers != fid:
continue
yield data[index]
def scan_batch_iter(data, batch_size, fid, num_workers):
"""node_batch_iter
"""
batch = []
cc = 0
for line_example in data.scan():
cc += 1
if cc % num_workers != fid:
continue
batch.append(line_example)
if len(batch) == batch_size:
yield batch
batch = []
if len(batch) > 0:
yield batch
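# Both iterators above shard work across data-loading workers round-robin:
# batch number cc is emitted only by the worker whose id fid satisfies
# cc % num_workers == fid, so every batch is produced exactly once.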
class BaseDataGenerator(object):
"""Base Data Geneartor"""
def __init__(self, buf_size, batch_size, num_workers, shuffle=True):
self.num_workers = num_workers
self.batch_size = batch_size
self.line_examples = []
self.buf_size = buf_size
self.shuffle = shuffle
def batch_fn(self, batch_examples):
""" batch_fn batch producer"""
raise NotImplementedError("No defined Batch Fn")
def batch_iter(self, fid, perm):
""" batch iterator"""
if self.shuffle:
for batch in batch_iter(self, perm, self.batch_size, fid,
self.num_workers):
yield batch
else:
for batch in scan_batch_iter(self, self.batch_size, fid,
self.num_workers):
yield batch
def __len__(self):
return len(self.line_examples)
def __getitem__(self, idx):
if isinstance(idx, collections.Iterable):
return [self[bidx] for bidx in idx]
else:
return self.line_examples[idx]
def generator(self):
"""batch dict generator"""
def worker(filter_id, perm):
""" multiprocess worker"""
def func_run():
""" func_run """
pid = os.getpid()
np.random.seed(pid + int(time.time()))
for batch_examples in self.batch_iter(filter_id, perm):
batch_dict = self.batch_fn(batch_examples)
yield batch_dict
return func_run
# consume a seed
np.random.rand()
if self.shuffle:
perm = np.arange(0, len(self))
np.random.shuffle(perm)
else:
perm = None
if self.num_workers == 1:
r = paddle.reader.buffered(worker(0, perm), self.buf_size)
else:
worker_pool = [
worker(wid, perm) for wid in range(self.num_workers)
]
worker = mp_reader.multiprocess_reader(
worker_pool, use_pipe=True, queue_size=1000)
r = paddle.reader.buffered(worker, self.buf_size)
for batch in r():
yield batch
def scan(self):
for line_example in self.line_examples:
yield line_example
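# A minimal sketch of the intended usage (ToyDataGenerator is hypothetical):
# subclasses fill self.line_examples and override batch_fn to turn a list of
# examples into a feed dict.
#
# class ToyDataGenerator(BaseDataGenerator):
#     def __init__(self, data, **kwargs):
#         super(ToyDataGenerator, self).__init__(**kwargs)
#         self.line_examples = list(data)
#
#     def batch_fn(self, batch_examples):
#         return {"batch": np.array(batch_examples)}
#
# gen = ToyDataGenerator(range(100), buf_size=10, batch_size=4, num_workers=1)
# for feed_dict in gen.generator():
#     pass  # feed feed_dict to an executor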
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
from dataloader.base_dataloader import BaseDataGenerator
from utils.to_undirected import to_undirected
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
from pgl.contrib.ogb.nodeproppred.dataset_pgl import PglNodePropPredDataset
from pgl.sample import graph_saint_random_walk_sample
from ogb.nodeproppred import Evaluator
import tqdm
from collections import namedtuple
import pgl
import numpy as np
import copy
def traverse(item):
"""traverse
"""
if isinstance(item, list) or isinstance(item, np.ndarray):
for i in iter(item):
for j in traverse(i):
yield j
else:
yield item
def flat_node_and_edge(nodes):
"""flat_node_and_edge
"""
nodes = list(set(traverse(nodes)))
return nodes
def k_hop_sampler(graph, samples, batch_nodes):
# for batch_train_samples, batch_train_labels in batch_info:
start_nodes = copy.deepcopy(batch_nodes)
nodes = start_nodes
edges = []
for max_deg in samples:
pred_nodes = graph.sample_predecessor(start_nodes, max_degree=max_deg)
for dst_node, src_nodes in zip(start_nodes, pred_nodes):
for src_node in src_nodes:
edges.append((src_node, dst_node))
last_nodes = nodes
nodes = [nodes, pred_nodes]
nodes = flat_node_and_edge(nodes)
# Find new nodes
start_nodes = list(set(nodes) - set(last_nodes))
if len(start_nodes) == 0:
break
subgraph = graph.subgraph(
nodes=nodes, edges=edges, with_node_feat=True, with_edge_feat=True)
sub_node_index = subgraph.reindex_from_parrent_nodes(batch_nodes)
return subgraph, sub_node_index
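# k_hop_sampler grows the node set one hop at a time (at most max_deg sampled
# predecessors per node per hop), induces the subgraph on all collected nodes
# and edges, and returns it with the batch nodes re-indexed to subgraph ids.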
def graph_saint_randomwalk_sampler(graph, batch_nodes, max_depth=3):
subgraph = graph_saint_random_walk_sample(graph, batch_nodes, max_depth)
sub_node_index = subgraph.reindex_from_parrent_nodes(batch_nodes)
return subgraph, sub_node_index
class ArxivDataGenerator(BaseDataGenerator):
def __init__(self,
graph_wrapper=None,
buf_size=1000,
batch_size=128,
num_workers=1,
samples=[30, 30],
shuffle=True,
phase="train"):
super(ArxivDataGenerator, self).__init__(
buf_size=buf_size,
num_workers=num_workers,
batch_size=batch_size,
shuffle=shuffle)
self.samples = samples
self.d_name = "ogbn-arxiv"
self.graph_wrapper = graph_wrapper
dataset = PglNodePropPredDataset(name=self.d_name)
splitted_idx = dataset.get_idx_split()
self.phase = phase
graph, label = dataset[0]
graph = to_undirected(graph)
self.graph = graph
self.num_nodes = graph.num_nodes
if self.phase == 'train':
nodes_idx = splitted_idx["train"]
labels = label[nodes_idx]
elif self.phase == "valid":
nodes_idx = splitted_idx["valid"]
labels = label[nodes_idx]
elif self.phase == "test":
nodes_idx = splitted_idx["test"]
labels = label[nodes_idx]
self.nodes_idx = nodes_idx
self.labels = labels
self.sample_based_line_example(nodes_idx, labels)
def sample_based_line_example(self, nodes_idx, labels):
self.line_examples = []
Example = namedtuple('Example', ["node", "label"])
for node, label in zip(nodes_idx, labels):
self.line_examples.append(Example(node=node, label=label))
print("Phase", self.phase)
print("Len Examples", len(self.line_examples))
def batch_fn(self, batch_ex):
batch_nodes = []
cc = 0
batch_node_id = []
batch_labels = []
for ex in batch_ex:
batch_nodes.append(ex.node)
batch_labels.append(ex.label)
_graph_wrapper = copy.copy(self.graph_wrapper)
#if self.phase == "train":
# subgraph, sub_node_index = graph_saint_randomwalk_sampler(self.graph, batch_nodes)
#else:
subgraph, sub_node_index = k_hop_sampler(self.graph, self.samples,
batch_nodes)
feed_dict = _graph_wrapper.to_feed(subgraph)
feed_dict["batch_nodes"] = sub_node_index
feed_dict["labels"] = np.array(batch_labels, dtype="int64")
return feed_dict
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# encoding=utf-8
"""lbs_model"""
import os
import re
import time
from random import random
from functools import reduce, partial
import numpy as np
import multiprocessing
import paddle
import paddle.fluid as F
import paddle.fluid as fluid
import paddle.fluid.layers as L
from pgl.graph_wrapper import GraphWrapper
from pgl.layers.conv import gcn, gat, gin
from pgl.utils import paddle_helper
class BaseGraph(object):
"""Base Graph Model"""
def __init__(self, args, graph_wrapper=None):
self.hidden_size = args.hidden_size
self.num_nodes = args.num_nodes
self.drop_rate = args.drop_rate
node_feature = [('feat', [None, 128], "float32")]
if graph_wrapper is None:
self.graph_wrapper = GraphWrapper(
name="graph", place=F.CPUPlace(), node_feat=node_feature)
else:
self.graph_wrapper = graph_wrapper
self.build_model(args)
def build_model(self, args):
""" build graph model"""
self.batch_nodes = L.data(
name="batch_nodes", shape=[-1], dtype="int64")
self.labels = L.data(name="labels", shape=[-1], dtype="int64")
self.batch_nodes = L.reshape(self.batch_nodes, [-1, 1])
self.labels = L.reshape(self.labels, [-1, 1])
self.batch_nodes.stop_gradient = True
self.labels.stop_gradient = True
feat = self.graph_wrapper.node_feat['feat']
if self.graph_wrapper is not None:
feat = self.neighbor_aggregator(feat)
assert feat is not None
feat = L.gather(feat, self.batch_nodes)
self.logits = L.fc(feat,
size=40,
act=None,
name="node_predictor_logits")
self.loss()
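# build_model gathers the (aggregated) features of just the mini-batch nodes
# and classifies them with a final fc into ogbn-arxiv's 40 subject classes.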
def mlp(self, feat):
for i in range(3):
feat = L.fc(feat,
size=self.hidden_size,
name="simple_mlp_{}".format(i))
feat = L.batch_norm(feat)
feat = L.relu(feat)
feat = L.dropout(feat, dropout_prob=0.5)
return feat
def loss(self):
self.loss = L.softmax_with_cross_entropy(self.logits, self.labels)
self.loss = L.reduce_mean(self.loss)
self.metrics = {"loss": self.loss, }
def neighbor_aggregator(self, feature):
"""neighbor aggregation"""
raise NotImplementedError(
"Please implement this method when you using graph wrapper for GNNs."
)
class MLPModel(BaseGraph):
def __init__(self, args, gw):
super(MLPModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
for i in range(3):
feature = L.fc(feature,
size=self.hidden_size,
name="simple_mlp_{}".format(i))
#feature = L.batch_norm(feature)
feature = L.relu(feature)
feature = L.dropout(feature, dropout_prob=self.drop_rate)
return feature
class SAGEModel(BaseGraph):
def __init__(self, args, gw):
super(SAGEModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
sage = GraphSageModel(40, 3, 256)
feature = sage.forward(self.graph_wrapper, feature, self.drop_rate)
return feature
class GAANModel(BaseGraph):
def __init__(self, args, gw):
super(GAANModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
gaan = GaANModel(
40,
3,
hidden_size_a=48,
hidden_size_v=64,
hidden_size_m=128,
hidden_size_o=256)
feature = gaan.forward(self.graph_wrapper, feature, self.drop_rate)
return feature
class GINModel(BaseGraph):
def __init__(self, args, gw):
super(GINModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
gin = GinModel(40, 2, 256)
feature = gin.forward(self.graph_wrapper, feature, self.drop_rate)
return feature
class GATModel(BaseGraph):
def __init__(self, args, gw):
super(GATModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
feature = gat(self.graph_wrapper,
feature,
hidden_size=self.hidden_size,
activation='relu',
name="GAT_1")
feature = gat(self.graph_wrapper,
feature,
hidden_size=self.hidden_size,
activation='relu',
name="GAT_2")
return feature
class GCNModel(BaseGraph):
def __init__(self, args, gw):
super(GCNModel, self).__init__(args, gw)
def neighbor_aggregator(self, feature):
feature = gcn(
self.graph_wrapper,
feature,
hidden_size=self.hidden_size,
activation='relu',
name="GCN_1", )
feature = fluid.layers.dropout(feature, dropout_prob=self.drop_rate)
feature = gcn(self.graph_wrapper,
feature,
hidden_size=self.hidden_size,
activation='relu',
name="GCN_2")
feature = fluid.layers.dropout(feature, dropout_prob=self.drop_rate)
return feature
class GinModel(object):
def __init__(self,
num_class,
num_layers,
hidden_size,
act='relu',
name="GINModel"):
self.num_class = num_class
self.num_layers = num_layers
self.hidden_size = hidden_size
self.act = act
self.name = name
def forward(self, gw, feature, drop_rate=None):  # drop_rate accepted for call-site parity; unused by GIN here
for i in range(self.num_layers):
feature = gin(gw, feature, self.hidden_size, self.act,
self.name + '_' + str(i))
feature = fluid.layers.layer_norm(
feature,
begin_norm_axis=1,
param_attr=fluid.ParamAttr(
name="norm_scale_%s" % (i),
initializer=fluid.initializer.Constant(1.0)),
bias_attr=fluid.ParamAttr(
name="norm_bias_%s" % (i),
initializer=fluid.initializer.Constant(0.0)), )
feature = fluid.layers.relu(feature)
return feature
class GaANModel(object):
def __init__(self,
num_class,
num_layers,
hidden_size_a=24,
hidden_size_v=32,
hidden_size_m=64,
hidden_size_o=128,
heads=8,
act='relu',
name="GaAN"):
self.num_class = num_class
self.num_layers = num_layers
self.hidden_size_a = hidden_size_a
self.hidden_size_v = hidden_size_v
self.hidden_size_m = hidden_size_m
self.hidden_size_o = hidden_size_o
self.act = act
self.name = name
self.heads = heads
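# Shape shorthand used in the comments below: N = #nodes, E = #edges,
# M = heads, D1 = hidden_size_a (per-head key/query dim),
# D2 = hidden_size_v (per-head value dim), Dm = hidden_size_m (gate dim).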
def GaANConv(self, gw, feature, name):
feat_key = fluid.layers.fc(
feature,
self.hidden_size_a * self.heads,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_key'))
# N * (D2 * M)
feat_value = fluid.layers.fc(
feature,
self.hidden_size_v * self.heads,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_value'))
# N * (D1 * M)
feat_query = fluid.layers.fc(
feature,
self.hidden_size_a * self.heads,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_query'))
# N * Dm
feat_gate = fluid.layers.fc(
feature,
self.hidden_size_m,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_gate'))
# send
message = gw.send(
self.send_func,
nfeat_list=[('node_feat', feature), ('feat_key', feat_key),
('feat_value', feat_value), ('feat_query', feat_query),
('feat_gate', feat_gate)],
efeat_list=None, )
# recv
output = gw.recv(message, self.recv_func)
output = fluid.layers.fc(
output,
self.hidden_size_o,
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_output'))
output = fluid.layers.leaky_relu(output, alpha=0.1)
output = fluid.layers.dropout(output, dropout_prob=0.1)
return output
def forward(self, gw, feature, drop_rate):
for i in range(self.num_layers):
feature = self.GaANConv(gw, feature, self.name + '_' + str(i))
feature = fluid.layers.dropout(feature, dropout_prob=drop_rate)
return feature
def send_func(self, src_feat, dst_feat, edge_feat):
# E * (M * D1)
feat_query, feat_key = dst_feat['feat_query'], src_feat['feat_key']
# E * M * D1
old = feat_query
feat_query = fluid.layers.reshape(
feat_query, [-1, self.heads, self.hidden_size_a])
feat_key = fluid.layers.reshape(feat_key,
[-1, self.heads, self.hidden_size_a])
# E * M
alpha = fluid.layers.reduce_sum(feat_key * feat_query, dim=-1)
return {
'dst_node_feat': dst_feat['node_feat'],
'src_node_feat': src_feat['node_feat'],
'feat_value': src_feat['feat_value'],
'alpha': alpha,
'feat_gate': src_feat['feat_gate']
}
def recv_func(self, message):
dst_feat = message['dst_node_feat']
src_feat = message['src_node_feat']
x = fluid.layers.sequence_pool(dst_feat, 'average')
z = fluid.layers.sequence_pool(src_feat, 'average')
feat_gate = message['feat_gate']
g_max = fluid.layers.sequence_pool(feat_gate, 'max')
g = fluid.layers.concat([x, g_max, z], axis=1)
g = fluid.layers.fc(g, self.heads, bias_attr=False, act="sigmoid")
# softmax
alpha = message['alpha']
alpha = paddle_helper.sequence_softmax(alpha) # E * M
feat_value = message['feat_value'] # E * (M * D2)
old = feat_value
feat_value = fluid.layers.reshape(
feat_value, [-1, self.heads, self.hidden_size_v]) # E * M * D2
feat_value = fluid.layers.elementwise_mul(feat_value, alpha, axis=0)
feat_value = fluid.layers.reshape(
feat_value, [-1, self.heads * self.hidden_size_v]) # E * (M * D2)
feat_value = fluid.layers.lod_reset(feat_value, old)
feat_value = fluid.layers.sequence_pool(feat_value,
'sum') # N * (M * D2)
feat_value = fluid.layers.reshape(
feat_value, [-1, self.heads, self.hidden_size_v]) # N * M * D2
output = fluid.layers.elementwise_mul(feat_value, g, axis=0)
output = fluid.layers.reshape(
output, [-1, self.heads * self.hidden_size_v]) # N * (M * D2)
output = fluid.layers.concat([x, output], axis=1)
return output
class GraphSageModel(object):
def __init__(self,
num_class,
num_layers,
hidden_size,
act='relu',
name="GraphSage"):
self.num_class = num_class
self.num_layers = num_layers
self.hidden_size = hidden_size
self.act = act
self.name = name
def GraphSageConv(self, gw, feature, name):
message = gw.send(
self.send_func,
nfeat_list=[('node_feat', feature)],
efeat_list=None, )
neighbor_feat = gw.recv(message, self.recv_func)
neighbor_feat = fluid.layers.fc(neighbor_feat,
self.hidden_size,
act=self.act,
name=name + '_n')
self_feature = fluid.layers.fc(feature,
self.hidden_size,
act=self.act,
name=name + '_s')
output = self_feature + neighbor_feat
output = fluid.layers.l2_normalize(output, axis=1)
return output
def SageConv(self, gw, feature, name, hidden_size, act):
message = gw.send(
self.send_func,
nfeat_list=[('node_feat', feature)],
efeat_list=None, )
neighbor_feat = gw.recv(message, self.recv_func)
neighbor_feat = fluid.layers.fc(neighbor_feat,
hidden_size,
act=None,
name=name + '_n')
self_feature = fluid.layers.fc(feature,
hidden_size,
act=None,
name=name + '_s')
output = self_feature + neighbor_feat
# output = fluid.layers.concat([self_feature, neighbor_feat], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
if act is not None:
output = L.relu(output)
return output
def bn_drop(self, feat, drop_rate):
#feat = L.batch_norm(feat)
feat = L.dropout(feat, dropout_prob=drop_rate)
return feat
def forward(self, gw, feature, drop_rate):
for i in range(self.num_layers):
final = (i == (self.num_layers - 1))
feature = self.SageConv(gw, feature, self.name + '_' + str(i),
self.hidden_size, None
if final else self.act)
if not final:
feature = self.bn_drop(feature, drop_rate)
return feature
def send_func(self, src_feat, dst_feat, edge_feat):
return src_feat["node_feat"]
def recv_func(self, feat):
return fluid.layers.sequence_pool(feat, pool_type="average")
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""init"""
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""train and evaluate"""
import tqdm
import json
import numpy as np
import sys
import os
import paddle.fluid as F
from tensorboardX import SummaryWriter
from ogb.nodeproppred import Evaluator
from ogb.nodeproppred import NodePropPredDataset
def multi_device(reader, dev_count):
"""multi device"""
if dev_count == 1:
for batch in reader:
yield batch
else:
batches = []
for batch in reader:
batches.append(batch)
if len(batches) == dev_count:
yield batches
batches = []
class OgbEvaluator(object):
def __init__(self):
d_name = "ogbn-arxiv"
dataset = NodePropPredDataset(name=d_name)
graph, label = dataset[0]
self.num_nodes = graph["num_nodes"]
self.ogb_evaluator = Evaluator(name="ogbn-arxiv")
def eval(self, scores, labels, phase):
pred = (np.argmax(scores, axis=1)).reshape([-1, 1])
ret = {}
ret['%s_acc' % (phase)] = self.ogb_evaluator.eval({
'y_true': labels,
'y_pred': pred,
})['acc']
return ret
def evaluate(model, valid_exe, valid_ds, valid_prog, dev_count, evaluator,
phase, full_batch):
"""evaluate """
cc = 0
scores = []
labels = []
if full_batch:
valid_iter = _full_batch_wapper(valid_ds)
else:
valid_iter = valid_ds.generator
for feed_dict in tqdm.tqdm(
multi_device(valid_iter(), dev_count), desc='evaluating'):
if dev_count > 1:
output = valid_exe.run(feed=feed_dict,
fetch_list=[model.logits, model.labels])
else:
output = valid_exe.run(valid_prog,
feed=feed_dict,
fetch_list=[model.logits, model.labels])
scores.append(output[0])
labels.append(output[1])
scores = np.vstack(scores)
labels = np.vstack(labels)
ret = evaluator.eval(scores, labels, phase)
return ret
def _create_if_not_exist(path):
basedir = os.path.dirname(path)
if not os.path.exists(basedir):
os.makedirs(basedir)
def _full_batch_wapper(ds):
feed_dict = {}
feed_dict["batch_nodes"] = np.array(ds.nodes_idx, dtype="int64")
feed_dict["labels"] = np.array(ds.labels, dtype="int64")
def r():
yield feed_dict
return r
def train_and_evaluate(exe,
train_exe,
valid_exe,
train_ds,
valid_ds,
test_ds,
train_prog,
valid_prog,
full_batch,
model,
metric,
epoch=20,
dev_count=1,
train_log_step=5,
eval_step=10000,
evaluator=None,
output_path=None):
"""train and evaluate"""
global_step = 0
log_path = os.path.join(output_path, "log")
_create_if_not_exist(log_path)
writer = SummaryWriter(log_path)
best_model = 0
if full_batch:
train_iter = _full_batch_wapper(train_ds)
else:
train_iter = train_ds.generator
for e in range(epoch):
ret_sum_loss = 0
per_step = 0
scores = []
labels = []
for feed_dict in tqdm.tqdm(
multi_device(train_iter(), dev_count), desc='Epoch %s' % e):
if dev_count > 1:
ret = train_exe.run(feed=feed_dict, fetch_list=metric.vars)
ret = [[np.mean(v)] for v in ret]
else:
ret = train_exe.run(
train_prog,
feed=feed_dict,
fetch_list=[model.loss, model.logits, model.labels]
#fetch_list=metric.vars
)
scores.append(ret[1])
labels.append(ret[2])
ret = [ret[0]]
ret = metric.parse(ret)
if global_step % train_log_step == 0:
for key, value in ret.items():
writer.add_scalar(
'train_' + key, value, global_step=global_step)
ret_sum_loss += ret['loss']
per_step += 1
global_step += 1
if global_step % eval_step == 0:
eval_ret = evaluate(model, exe, valid_ds, valid_prog, 1,
evaluator, "valid", full_batch)
test_eval_ret = evaluate(model, exe, test_ds, valid_prog, 1,
evaluator, "test", full_batch)
eval_ret.update(test_eval_ret)
sys.stderr.write(json.dumps(eval_ret, indent=4) + "\n")
for key, value in eval_ret.items():
writer.add_scalar(key, value, global_step=global_step)
if eval_ret["valid_acc"] > best_model:
F.io.save_persistables(
exe,
os.path.join(output_path, "checkpoint"), train_prog)
eval_ret["epoch"] = e
#eval_ret["step"] = global_step
with open(os.path.join(output_path, "best.txt"), "w") as f:
f.write(json.dumps(eval_ret, indent=2) + '\n')
best_model = eval_ret["valid_acc"]
scores = np.vstack(scores)
labels = np.vstack(labels)
ret = evaluator.eval(scores, labels, "train")
sys.stderr.write(json.dumps(ret, indent=4) + "\n")
#print(json.dumps(ret, indent=4) + "\n")
# Epoch End
sys.stderr.write("epoch:{}, average loss {}\n".format(e, ret_sum_loss /
per_step))
eval_ret = evaluate(model, exe, valid_ds, valid_prog, 1, evaluator,
"valid", full_batch)
test_eval_ret = evaluate(model, exe, test_ds, valid_prog, 1, evaluator,
"test", full_batch)
eval_ret.update(test_eval_ret)
sys.stderr.write(json.dumps(eval_ret, indent=4) + "\n")
for key, value in eval_ret.items():
writer.add_scalar(key, value, global_step=global_step)
if eval_ret["valid_acc"] > best_model:
F.io.save_persistables(exe,
os.path.join(output_path, "checkpoint"),
train_prog)
#eval_ret["step"] = global_step
eval_ret["epoch"] = e
with open(os.path.join(output_path, "best.txt"), "w") as f:
f.write(json.dumps(eval_ret, indent=2) + '\n')
best_model = eval_ret["valid_acc"]
writer.close()
device=0
model='gaan'
lr=0.001
drop=0.5
CUDA_VISIBLE_DEVICES=${device} \
python -u train.py \
--use_cuda 1 \
--num_workers 4 \
--output_path ./output/model \
--batch_size 1024 \
--test_batch_size 512 \
--epoch 100 \
--learning_rate ${lr} \
--full_batch 0 \
--model ${model} \
--drop_rate ${drop} \
--samples 8 8 8 \
--test_samples 20 20 20 \
--hidden_size 256
# Copyright (c) 2018 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""listwise model
"""
import torch
import os
import re
import time
import logging
from random import random
from functools import reduce, partial
# For downloading ogb
import ssl
ssl._create_default_https_context = ssl._create_unverified_context
# SSL
import numpy as np
import multiprocessing
import pgl
import paddle
import paddle.fluid as F
import paddle.fluid.layers as L
from args import parser
from utils.args import print_arguments, check_cuda
from utils.init import init_checkpoint, init_pretraining_params
from utils.to_undirected import to_undirected
from model import BaseGraph, MLPModel, SAGEModel, GAANModel, GATModel, GCNModel, GINModel
from dataloader.ogbn_arxiv_dataloader import ArxivDataGenerator
from monitor.train_monitor import train_and_evaluate, OgbEvaluator
from pgl.contrib.ogb.nodeproppred.dataset_pgl import PglNodePropPredDataset
log = logging.getLogger(__name__)
class Metric(object):
"""Metric"""
def __init__(self, **args):
self.args = args
@property
def vars(self):
""" fetch metric vars"""
values = [self.args[k] for k in self.args.keys()]
return values
def parse(self, fetch_list):
"""parse"""
tup = list(zip(self.args.keys(), [float(v[0]) for v in fetch_list]))
return dict(tup)
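# Metric just pairs fetch-var names with fetched values; e.g. with
# Metric(loss=model.loss), vars == [model.loss] and
# parse([[0.7]]) == {"loss": 0.7}.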
if __name__ == '__main__':
args = parser.parse_args()
print_arguments(args)
evaluator = OgbEvaluator()
train_prog = F.Program()
startup_prog = F.Program()
args.num_nodes = evaluator.num_nodes
if args.use_cuda:
dev_list = F.cuda_places()
place = dev_list[0]
dev_count = len(dev_list)
else:
place = F.CPUPlace()
dev_count = int(os.environ.get('CPU_NUM', multiprocessing.cpu_count()))
assert dev_count == 1, "The program does not support multiple devices yet!"
dataset = PglNodePropPredDataset(name="ogbn-arxiv")
graph, label = dataset[0]
graph = to_undirected(graph)
if args.model is None:
Model = BaseGraph
elif args.model.upper() == "MLP":
Model = MLPModel
elif args.model.upper() == "SAGE":
Model = SAGEModel
elif args.model.upper() == "GAT":
Model = GATModel
elif args.model.upper() == "GCN":
Model = GCNModel
elif args.model.upper() == "GAAN":
Model = GAANModel
elif args.model.upper() == "GIN":
Model = GINModel
else:
raise ValueError("Not support {} model!".format(args.model))
with F.program_guard(train_prog, startup_prog):
with F.unique_name.guard():
if args.full_batch:
gw = pgl.graph_wrapper.StaticGraphWrapper(
name="graph", graph=graph, place=place)
else:
gw = pgl.graph_wrapper.GraphWrapper(
name="graph",
node_feat=graph.node_feat_info(),
edge_feat=graph.edge_feat_info())
log.info(gw.node_feat.keys())
graph_model = Model(args, gw)
test_prog = train_prog.clone(for_test=True)
opt = F.optimizer.Adam(learning_rate=args.learning_rate)
opt.minimize(graph_model.loss)
train_ds = ArxivDataGenerator(
phase="train",
graph_wrapper=graph_model.graph_wrapper,
num_workers=args.num_workers,
batch_size=args.batch_size,
samples=args.samples)
valid_ds = ArxivDataGenerator(
phase="valid",
graph_wrapper=graph_model.graph_wrapper,
num_workers=args.num_workers,
batch_size=args.test_batch_size,
samples=args.test_samples)
test_ds = ArxivDataGenerator(
phase="test",
graph_wrapper=graph_model.graph_wrapper,
num_workers=args.num_workers,
batch_size=args.test_batch_size,
samples=args.test_samples)
exe = F.Executor(place)
exe.run(startup_prog)
if args.full_batch:
gw.initialize(place)
if args.init_pretraining_params is not None:
init_pretraining_params(
exe, args.init_pretraining_params, main_program=startup_prog)
metric = Metric(**graph_model.metrics)
nccl2_num_trainers = 1
nccl2_trainer_id = 0
if dev_count > 1:
exec_strategy = F.ExecutionStrategy()
exec_strategy.num_threads = dev_count
train_exe = F.ParallelExecutor(
use_cuda=args.use_cuda,
loss_name=graph_model.loss.name,
exec_strategy=exec_strategy,
main_program=train_prog,
num_trainers=nccl2_num_trainers,
trainer_id=nccl2_trainer_id)
test_exe = exe
else:
train_exe, test_exe = exe, exe
train_and_evaluate(
exe=exe,
train_exe=train_exe,
valid_exe=test_exe,
train_ds=train_ds,
valid_ds=valid_ds,
test_ds=test_ds,
train_prog=train_prog,
valid_prog=test_prog,
full_batch=args.full_batch,
train_log_step=5,
output_path=args.output_path,
dev_count=dev_count,
model=graph_model,
epoch=args.epoch,
eval_step=1000000,
evaluator=evaluator,
metric=metric)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""utils"""
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Arguments for configuration."""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import six
import os
import sys
import argparse
import logging
import paddle.fluid as fluid
log = logging.getLogger(__name__)
def prepare_logger(logger, debug=False, save_to_file=None):
"""doc"""
formatter = logging.Formatter(
fmt='[%(levelname)s] %(asctime)s [%(filename)12s:%(lineno)5d]:\t%(message)s'
)
#console_hdl = logging.StreamHandler()
#console_hdl.setFormatter(formatter)
#logger.addHandler(console_hdl)
if save_to_file is not None and not os.path.exists(save_to_file):
file_hdl = logging.FileHandler(save_to_file)
file_hdl.setFormatter(formatter)
logger.addHandler(file_hdl)
logger.setLevel(logging.DEBUG)
logger.propagate = False
def str2bool(v):
"""doc"""
# because argparse does not support to parse "true, False" as python
# boolean directly
return v.lower() in ("true", "t", "1")
class ArgumentGroup(object):
"""doc"""
def __init__(self, parser, title, des):
self._group = parser.add_argument_group(title=title, description=des)
def add_arg(self,
name,
type,
default,
help,
positional_arg=False,
**kwargs):
"""doc"""
prefix = "" if positional_arg else "--"
type = str2bool if type == bool else type
self._group.add_argument(
prefix + name,
default=default,
type=type,
help=help + ' Default: %(default)s.',
**kwargs)
def print_arguments(args):
"""doc"""
log.info('----------- Configuration Arguments -----------')
for arg, value in sorted(six.iteritems(vars(args))):
log.info('%s: %s' % (arg, value))
log.info('------------------------------------------------')
def check_cuda(use_cuda, err= \
"\nYou can not set use_cuda=True in the model because you are using paddlepaddle-cpu.\n \
Please: 1. Install paddlepaddle-gpu to run your models on GPU or 2. Set use_cuda=False to run models on CPU.\n"
):
"""doc"""
try:
if use_cuda and not fluid.is_compiled_with_cuda():
log.error(err)
sys.exit(1)
except Exception as e:
pass
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""paddle init"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
from __future__ import unicode_literals
import os
import six
import ast
import copy
import logging
import numpy as np
import paddle.fluid as fluid
log = logging.getLogger(__name__)
def cast_fp32_to_fp16(exe, main_program):
"""doc"""
log.info("Cast parameters to float16 data format.")
for param in main_program.global_block().all_parameters():
if not param.name.endswith(".master"):
param_t = fluid.global_scope().find_var(param.name).get_tensor()
data = np.array(param_t)
if param.name.startswith("encoder_layer") \
and "layer_norm" not in param.name:
param_t.set(np.float16(data).view(np.uint16), exe.place)
#load fp32
master_param_var = fluid.global_scope().find_var(param.name +
".master")
if master_param_var is not None:
master_param_var.get_tensor().set(data, exe.place)
def init_checkpoint(exe, init_checkpoint_path, main_program, use_fp16=False):
"""init"""
assert os.path.exists(
init_checkpoint_path), "[%s] cann't be found." % init_checkpoint_path
def existed_persistables(var):
"""existed"""
if not fluid.io.is_persistable(var):
return False
return os.path.exists(os.path.join(init_checkpoint_path, var.name))
fluid.io.load_vars(
exe,
init_checkpoint_path,
main_program=main_program,
predicate=existed_persistables)
log.info("Load model from {}".format(init_checkpoint_path))
if use_fp16:
cast_fp32_to_fp16(exe, main_program)
def init_pretraining_params(exe,
pretraining_params_path,
main_program,
use_fp16=False):
"""init"""
assert os.path.exists(pretraining_params_path
), "[%s] cann't be found." % pretraining_params_path
def existed_params(var):
"""doc"""
if not isinstance(var, fluid.framework.Parameter):
return False
return os.path.exists(os.path.join(pretraining_params_path, var.name))
fluid.io.load_vars(
exe,
pretraining_params_path,
main_program=main_program,
predicate=existed_params)
log.info("Load pretraining parameters from {}.".format(
pretraining_params_path))
if use_fp16:
cast_fp32_to_fp16(exe, main_program)
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""Arguments for configuration."""
from __future__ import absolute_import
from __future__ import unicode_literals
import paddle.fluid as fluid
import pgl
import numpy as np
def to_undirected(graph):
inv_edges = np.zeros(graph.edges.shape, dtype=graph.edges.dtype)
inv_edges[:, 0] = graph.edges[:, 1]
inv_edges[:, 1] = graph.edges[:, 0]
edges = np.vstack((graph.edges, inv_edges))
g = pgl.graph.Graph(num_nodes=graph.num_nodes, edges=edges)
for k, v in graph._edge_feat.items():
g._edge_feat[k] = np.vstack((v, v))
for k, v in graph._node_feat.items():
g._node_feat[k] = v
return g
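# A small worked example (hypothetical values): for edges [[0, 1], [1, 2]],
# inv_edges is [[1, 0], [2, 1]], so the returned graph holds all four edges;
# every edge feature is stacked twice to stay aligned with the doubled edges.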
......@@ -21,3 +21,4 @@ from pgl import data_loader
from pgl import heter_graph
from pgl import heter_graph_wrapper
from pgl import contrib
from pgl import message_passing
......@@ -593,7 +593,7 @@ class Graph(object):
edges = self._edges[eid]
else:
edges = np.array(edges, dtype="int64")
sub_edges = graph_kernel.map_edges(
np.arange(
len(edges), dtype="int64"), edges, reindex)
......
......@@ -321,3 +321,43 @@ def alias_sample_build_table(np.ndarray[np.float64_t, ndim=1] probs):
if alias[l_i] < 1:
smaller_num.push_back(l_i)
return alias, events
@cython.boundscheck(False)
@cython.wraparound(False)
def extract_edges_from_nodes(
np.ndarray[np.int64_t, ndim=1] adj_indptr,
np.ndarray[np.int64_t, ndim=1] sorted_v,
np.ndarray[np.int64_t, ndim=1] sorted_eid,
vector[long long] sampled_nodes,
):
"""
    Extract, from the original graph, the ids of all edges whose two endpoints
    both lie in ``sampled_nodes``.
    ret_edge_index: the eids of edges between sampled_nodes.
    Reference: https://github.com/GraphSAINT/GraphSAINT
"""
cdef long long i, v, j
cdef long long num_v_orig, num_v_sub
cdef long long start_neigh, end_neigh
cdef vector[int] _arr_bit
cdef vector[long long] ret_edge_index
num_v_orig = adj_indptr.size-1
_arr_bit = vector[int](num_v_orig,-1)
num_v_sub = sampled_nodes.size()
i = 0
with nogil:
while i < num_v_sub:
_arr_bit[sampled_nodes[i]] = i
i = i + 1
i = 0
while i < num_v_sub:
v = sampled_nodes[i]
start_neigh = adj_indptr[v]
end_neigh = adj_indptr[v+1]
j = start_neigh
while j < end_neigh:
if _arr_bit[sorted_v[j]] > -1:
ret_edge_index.push_back(sorted_eid[j])
j = j + 1
i = i + 1
return ret_edge_index
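# For readers less familiar with CSR traversal, a minimal pure-Python sketch
# of the same kernel (illustrative and slow; assumes the module's runtime
# `import numpy as np`). `in_sample` plays the role of `_arr_bit`.
def extract_edges_from_nodes_py(adj_indptr, sorted_v, sorted_eid, sampled_nodes):
    in_sample = np.full(adj_indptr.size - 1, -1, dtype=np.int64)
    in_sample[np.asarray(sampled_nodes)] = np.arange(len(sampled_nodes))
    ret_edge_index = []
    for v in sampled_nodes:
        start, end = adj_indptr[v], adj_indptr[v + 1]
        keep = in_sample[sorted_v[start:end]] > -1
        ret_edge_index.extend(sorted_eid[start:end][keep])
    return np.asarray(ret_edge_index, dtype=np.int64)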
......@@ -15,10 +15,10 @@
graph neural networks.
"""
import paddle.fluid as fluid
from pgl import graph_wrapper
from pgl.utils import paddle_helper
from pgl import message_passing
__all__ = ['gcn', 'gat', 'gin']
__all__ = ['gcn', 'gat', 'gin', 'gaan', 'gen_conv']
def gcn(gw, feature, hidden_size, activation, name, norm=None):
......@@ -258,3 +258,149 @@ def gin(gw,
bias_attr=fluid.ParamAttr(name="%s_b_1" % name))
return output
def gaan(gw, feature, hidden_size_a, hidden_size_v, hidden_size_m, hidden_size_o, heads, name):
"""Implementation of GaAN"""
def send_func(src_feat, dst_feat, edge_feat):
        # compute the attention score on each edge
        # E * (M * D1); every dst node queries the src nodes of all its incident edges
        feat_query, feat_key = dst_feat['feat_query'], src_feat['feat_key']
        # E * M * D1
feat_query = fluid.layers.reshape(feat_query, [-1, heads, hidden_size_a])
feat_key = fluid.layers.reshape(feat_key, [-1, heads, hidden_size_a])
# E * M
alpha = fluid.layers.reduce_sum(feat_key * feat_query, dim=-1)
return {'dst_node_feat': dst_feat['node_feat'],
'src_node_feat': src_feat['node_feat'],
'feat_value': src_feat['feat_value'],
'alpha': alpha,
'feat_gate': src_feat['feat_gate']}
def recv_func(message):
        # features of the dst endpoint of each edge
        dst_feat = message['dst_node_feat']
        # features of the src endpoint of each edge
        src_feat = message['src_node_feat']
        # each centre node's own feature (dst features repeat within a
        # segment, so averaging recovers them)
        x = fluid.layers.sequence_pool(dst_feat, 'average')
        # mean of the neighbour features of each centre node
        z = fluid.layers.sequence_pool(src_feat, 'average')
        # compute the gate
feat_gate = message['feat_gate']
g_max = fluid.layers.sequence_pool(feat_gate, 'max')
g = fluid.layers.concat([x, g_max, z], axis=1)
g = fluid.layers.fc(g, heads, bias_attr=False, act="sigmoid")
# softmax
alpha = message['alpha']
alpha = paddle_helper.sequence_softmax(alpha) # E * M
feat_value = message['feat_value'] # E * (M * D2)
old = feat_value
feat_value = fluid.layers.reshape(feat_value, [-1, heads, hidden_size_v]) # E * M * D2
feat_value = fluid.layers.elementwise_mul(feat_value, alpha, axis=0)
feat_value = fluid.layers.reshape(feat_value, [-1, heads*hidden_size_v]) # E * (M * D2)
feat_value = fluid.layers.lod_reset(feat_value, old)
feat_value = fluid.layers.sequence_pool(feat_value, 'sum') # N * (M * D2)
feat_value = fluid.layers.reshape(feat_value, [-1, heads, hidden_size_v]) # N * M * D2
output = fluid.layers.elementwise_mul(feat_value, g, axis=0)
output = fluid.layers.reshape(output, [-1, heads * hidden_size_v]) # N * (M * D2)
output = fluid.layers.concat([x, output], axis=1)
return output
    # feature: N * D
    # compute what each node needs to send out:
    # the projected feature vectors
    # N * (D1 * M)
feat_key = fluid.layers.fc(feature, hidden_size_a * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_key'))
# N * (D2 * M)
feat_value = fluid.layers.fc(feature, hidden_size_v * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_value'))
# N * (D1 * M)
feat_query = fluid.layers.fc(feature, hidden_size_a * heads, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_query'))
# N * Dm
feat_gate = fluid.layers.fc(feature, hidden_size_m, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_gate'))
    # send stage
message = gw.send(
send_func,
nfeat_list=[('node_feat', feature), ('feat_key', feat_key), ('feat_value', feat_value),
('feat_query', feat_query), ('feat_gate', feat_gate)],
efeat_list=None,
)
    # aggregate the neighbour features
output = gw.recv(message, recv_func)
output = fluid.layers.fc(output, hidden_size_o, bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_project_output'))
output = fluid.layers.leaky_relu(output, alpha=0.1)
output = fluid.layers.dropout(output, dropout_prob=0.1)
return output
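# A minimal usage sketch: `gw` is assumed to be a pgl GraphWrapper whose
# nodes carry a float32 feature named "feat"; the head count, hidden sizes
# and layer name are illustrative only.
def _gaan_example(gw):
    return gaan(gw, gw.node_feat["feat"],
                hidden_size_a=16, hidden_size_v=16, hidden_size_m=32,
                hidden_size_o=64, heads=8, name="gaan_layer_1")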
def gen_conv(gw,
feature,
name,
beta=None):
"""Implementation of GENeralized Graph Convolution (GENConv), see the paper
"DeeperGCN: All You Need to Train Deeper GCNs" in
https://arxiv.org/pdf/2006.07739.pdf
Args:
        gw: Graph wrapper object (:code:`StaticGraphWrapper` or :code:`GraphWrapper`)
        feature: A tensor with shape (num_nodes, feature_size).
        name: deeper gcn layer names.
        beta: a float in [0, +infinity) used as the inverse temperature of the
            softmax aggregator, the string "dynamic" for a learnable scalar, or None.
Return:
A tensor with shape (num_nodes, feature_size)
"""
if beta == "dynamic":
beta = fluid.layers.create_parameter(
shape=[1],
dtype='float32',
default_initializer=
fluid.initializer.ConstantInitializer(value=1.0),
name=name + '_beta')
# message passing
msg = gw.send(message_passing.copy_send, nfeat_list=[("h", feature)])
output = gw.recv(msg, message_passing.softmax_agg(beta))
# msg norm
output = message_passing.msg_norm(feature, output, name)
output = feature + output
output = fluid.layers.fc(output,
feature.shape[-1],
bias_attr=False,
act="relu",
param_attr=fluid.ParamAttr(name=name + '_weight1'))
output = fluid.layers.fc(output,
feature.shape[-1],
bias_attr=False,
param_attr=fluid.ParamAttr(name=name + '_weight2'))
return output
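# A minimal usage sketch: two stacked GENConv layers with a learnable
# ("dynamic") beta, in the spirit of DeeperGCN; `gw`, `feature` and the
# layer names are placeholders.
def _gen_conv_example(gw, feature):
    h = gen_conv(gw, feature, name="gen_conv_1", beta="dynamic")
    h = gen_conv(gw, h, name="gen_conv_2", beta="dynamic")
    return h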
# Copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""This package implements some common message passing
functions to help build graph neural networks.
"""
import numpy as np
import paddle
import paddle.fluid as fluid
import paddle.fluid.layers as L
from pgl.utils import paddle_helper
__all__ = ['copy_send', 'weighted_copy_send', 'mean_recv',
'sum_recv', 'max_recv', 'lstm_recv', 'graphsage_sum',
'graphsage_mean', 'pinsage_mean', 'pinsage_sum',
'softmax_agg', 'msg_norm']
def copy_send(src_feat, dst_feat, edge_feat):
"""doc"""
return src_feat["h"]
def weighted_copy_send(src_feat, dst_feat, edge_feat):
"""doc"""
return src_feat["h"] * edge_feat["weight"]
def mean_recv(feat):
"""doc"""
return fluid.layers.sequence_pool(feat, pool_type="average")
def sum_recv(feat):
"""doc"""
return fluid.layers.sequence_pool(feat, pool_type="sum")
def max_recv(feat):
"""doc"""
return fluid.layers.sequence_pool(feat, pool_type="max")
def lstm_recv(hidden_dim):
"""doc"""
def lstm_recv_inside(feat):
forward, _ = fluid.layers.dynamic_lstm(
input=feat, size=hidden_dim * 4, use_peepholes=False)
output = fluid.layers.sequence_last_step(forward)
return output
return lstm_recv_inside
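# A minimal usage sketch for lstm_recv: dynamic_lstm expects its input width
# to be 4 * hidden_dim, so the node feature is projected first; sizes are
# illustrative.
def _lstm_agg_example(gw, feature, hidden_dim=64):
    h = fluid.layers.fc(feature, hidden_dim * 4, bias_attr=False)
    msg = gw.send(copy_send, nfeat_list=[("h", h)])
    return gw.recv(msg, lstm_recv(hidden_dim))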
def graphsage_sum(gw, feature, hidden_size, act, initializer, learning_rate, name):
"""doc"""
msg = gw.send(copy_send, nfeat_list=[("h", feature)])
neigh_feature = gw.recv(msg, sum_recv)
self_feature = feature
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
return output
def graphsage_mean(gw, feature, hidden_size, act, initializer, learning_rate, name):
"""doc"""
msg = gw.send(copy_send, nfeat_list=[("h", feature)])
neigh_feature = gw.recv(msg, mean_recv)
self_feature = feature
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
return output
def pinsage_mean(gw, feature, hidden_size, act, initializer, learning_rate, name):
"""doc"""
msg = gw.send(weighted_copy_send, nfeat_list=[("h", feature)], efeat_list=["weight"])
neigh_feature = gw.recv(msg, mean_recv)
self_feature = feature
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
return output
def pinsage_sum(gw, feature, hidden_size, act, initializer, learning_rate, name):
"""doc"""
msg = gw.send(weighted_copy_send, nfeat_list=[("h", feature)], efeat_list=["weight"])
neigh_feature = gw.recv(msg, sum_recv)
self_feature = feature
self_feature = fluid.layers.fc(self_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_l.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_l.b_0"
)
neigh_feature = fluid.layers.fc(neigh_feature,
hidden_size,
act=act,
param_attr=fluid.ParamAttr(name=name + "_r.w_0", initializer=initializer,
learning_rate=learning_rate),
bias_attr=name+"_r.b_0"
)
output = fluid.layers.concat([self_feature, neigh_feature], axis=1)
output = fluid.layers.l2_normalize(output, axis=1)
return output
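# The four aggregators above share one calling convention; a minimal sketch
# of a two-layer GraphSAGE-mean stack (the initializer and learning rate are
# illustrative choices).
def _graphsage_example(gw, feature, hidden_size=64):
    init = fluid.initializer.XavierInitializer()
    h = graphsage_mean(gw, feature, hidden_size, "relu", init, 1.0, "sage_1")
    h = graphsage_mean(gw, h, hidden_size, "relu", init, 1.0, "sage_2")
    return h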
def softmax_agg(beta):
"""Implementation of softmax_agg aggregator, see more information in the paper
"DeeperGCN: All You Need to Train Deeper GCNs"
(https://arxiv.org/pdf/2006.07739.pdf)
Args:
msg: the received message, lod-tensor, (batch_size, seq_len, hidden_size)
beta: Inverse Temperature
Return:
An output tensor with shape (num_nodes, hidden_size)
"""
def softmax_agg_inside(msg):
alpha = paddle_helper.sequence_softmax(msg, beta)
msg = msg * alpha
return fluid.layers.sequence_pool(msg, "sum")
return softmax_agg_inside
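# Concretely, for a node whose incoming messages are m_1, ..., m_k, the
# aggregator computes, per hidden dimension d:
#     out_d = sum_i softmax_i(beta * m_{., d}) * m_{i, d}
# so beta acts as an inverse temperature: beta -> +inf approaches max
# pooling, beta -> 0 approaches mean pooling.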
def msg_norm(x, msg, name):
"""Implementation of message normalization, see more information in the paper
"DeeperGCN: All You Need to Train Deeper GCNs"
(https://arxiv.org/pdf/2006.07739.pdf)
Args:
x: centre node feature (num_nodes, feature_size)
        msg: aggregated neighbour message (num_nodes, feature_size)
        name: name prefix for the learnable scale s
Return:
An output tensor with shape (num_nodes, feature_size)
"""
s = fluid.layers.create_parameter(
shape=[1],
dtype='float32',
default_initializer=
fluid.initializer.ConstantInitializer(value=1.0),
name=name + '_s_msg_norm')
msg = fluid.layers.l2_normalize(msg, axis=1)
    # ||x||_2 as in the paper; the summed squares need a sqrt
    x_norm = fluid.layers.sqrt(
        fluid.layers.reduce_sum(x * x, dim=1, keep_dim=True))
msg = msg * x_norm * s
return msg
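# In formula form this is MsgNorm from the same paper:
#     msg' = s * ||x||_2 * (msg / ||msg||_2)
# and gen_conv applies the residual update x + msg' before its two fc layers.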
......@@ -24,7 +24,7 @@ from pgl import graph_kernel
__all__ = [
'graphsage_sample', 'node2vec_sample', 'deepwalk_sample',
    'metapath_randomwalk', 'pinsage_sample', 'graph_saint_random_walk_sample'
]
......@@ -55,7 +55,7 @@ def edge_hash(src, dst):
def graphsage_sample(graph, nodes, samples, ignore_edges=[]):
"""Implement of graphsage sample.
Reference paper: https://cs.stanford.edu/people/jure/pubs/graphsage-nips17.pdf.
Args:
......@@ -63,7 +63,7 @@ def graphsage_sample(graph, nodes, samples, ignore_edges=[]):
        nodes: the nodes to start sampling from
        samples: A list, number of neighbors in each layer
        ignore_edges: list of edges (src, dst) to be ignored.
Return:
A list of subgraphs
"""
......@@ -129,7 +129,7 @@ def alias_sample(size, alias, events):
size: Output shape.
        alias: The alias table built by `alias_sample_build_table`.
        events: The events table built by `alias_sample_build_table`.
Return:
samples: The generated random samples.
"""
......@@ -283,13 +283,13 @@ def metapath_randomwalk(graph,
Args:
graph: instance of pgl heterogeneous graph
start_nodes: start nodes to generate walk
        metapath: meta path for sample nodes.
e.g: "c2p-p2a-a2p-p2c"
walk_length: the walk length
Return:
        a list of metapath walks.
"""
edge_types = metapath.split('-')
......@@ -390,18 +390,18 @@ def pinsage_sample(graph,
norm_bais=1.0,
ignore_edges=set()):
"""Implement of graphsage sample.
Reference paper: .
Args:
graph: A pgl graph instance
nodes: Sample starting from nodes
samples: A list, number of neighbors in each layer
        top_k: select the top_k visit count nodes to construct the edges
        proba: the probability to return the origin node
        norm_bais: the normalization for the visit count
        ignore_edges: list of edges (src, dst) to be ignored.
Return:
A list of subgraphs
"""
......@@ -476,3 +476,43 @@ def pinsage_sample(graph,
layer_nodes[0], dtype="int64")
return subgraphs
def extract_edges_from_nodes(graph, sample_nodes):
    """Return the eids of all edges whose endpoints are both in ``sample_nodes``."""
eids = graph_kernel.extract_edges_from_nodes(
graph.adj_src_index._indptr, graph.adj_src_index._sorted_v,
graph.adj_src_index._sorted_eid, sample_nodes)
return eids
def graph_saint_random_walk_sample(graph,
nodes,
max_depth,
alias_name=None,
events_name=None):
"""Implement of graph saint random walk sample.
First, this function will get random walks path for given nodes and depth.
Then, it will create subgraph from all sampled nodes.
Reference Paper: https://arxiv.org/abs/1907.04931
Args:
graph: A pgl graph instance
nodes: Walk starting from nodes
max_depth: Max walking depth
Return:
a subgraph of sampled nodes.
"""
    # compute out-degrees up front before sampling the walks
    graph.outdegree()
walks = deepwalk_sample(graph, nodes, max_depth, alias_name, events_name)
sample_nodes = []
for walk in walks:
sample_nodes.extend(walk)
sample_nodes = np.unique(sample_nodes)
eids = extract_edges_from_nodes(graph, sample_nodes)
subgraph = graph.subgraph(
nodes=sample_nodes, eid=eids, with_node_feat=True, with_edge_feat=True)
subgraph.node_feat["index"] = np.array(sample_nodes, dtype="int64")
return subgraph
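# A minimal usage sketch: draw one GraphSAINT subgraph per training step from
# randomly chosen root nodes (root count and walk depth are illustrative).
def _graph_saint_example(graph, num_roots=2000, max_depth=3):
    roots = np.random.choice(graph.num_nodes, size=num_roots, replace=False)
    return graph_saint_random_walk_sample(graph, roots, max_depth)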
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""graph saint sample test
"""
from __future__ import division
from __future__ import absolute_import
from __future__ import print_function
from __future__ import unicode_literals
import unittest
import numpy as np
import pgl
import paddle.fluid as fluid
from pgl.sample import graph_saint_random_walk_sample
class GraphSaintSampleTest(unittest.TestCase):
"""GraphSaintSampleTest"""
def test_randomwalk_sampler(self):
"""test_randomwalk_sampler"""
g = pgl.graph.Graph(
num_nodes=8,
edges=[(1, 2), (2, 3), (0, 2), (0, 1), (6, 7), (4, 5), (6, 4),
(7, 4), (3, 4)])
subgraph = graph_saint_random_walk_sample(g, [6, 7], 2)
print('reindex', subgraph._from_reindex)
print('subedges', subgraph.edges)
assert len(subgraph.nodes) == 4
assert len(subgraph.edges) == 4
        true_edges = np.array([[0, 1], [2, 3], [2, 0], [3, 0]])
        np.testing.assert_array_equal(subgraph.edges, true_edges)
if __name__ == '__main__':
unittest.main()
# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
""" Log writer setup: interface for training visualization.
"""
import six
LogWriter = None
if six.PY3:
# We highly recommend using VisualDL (https://github.com/PaddlePaddle/VisualDL)
# for training visualization in Python 3.
    from visualdl import LogWriter
elif six.PY2:
from tensorboardX import SummaryWriter
LogWriter = SummaryWriter
else:
raise ValueError("Not running on Python2 or Python3 ?")
......@@ -25,7 +25,7 @@ except:
import numpy as np
import time
import paddle.fluid as fluid
from multiprocessing import Queue
import threading
......
......@@ -185,7 +185,7 @@ def lod_constant(name, value, lod, dtype):
return output, data_initializer
def sequence_softmax(x, beta=None):
"""Compute sequence softmax over paddle LodTensor
    This function computes softmax normalization along the length of each sequence.
......@@ -194,10 +194,15 @@ def sequence_softmax(x):
Args:
x: The input variable which is a LodTensor.
        beta: inverse temperature; if not None, ``x`` is scaled by beta before the softmax.
Return:
Output of sequence_softmax
"""
if beta is not None:
x = x * beta
x_max = fluid.layers.sequence_pool(x, "max")
x_max = fluid.layers.sequence_expand_as(x_max, x)
x = x - x_max
......
......@@ -4,3 +4,5 @@ cython >= 0.25.2
#paddlepaddle
redis-py-cluster
visualdl >= 2.0.0b ; python_version >= "3"
......@@ -42,8 +42,8 @@
" d = 16\n",
" feature = np.random.randn(num_node, d).astype(\"float32\")\n",
" #feature = np.array(feature, dtype=\"float32\")\n",
" # 对于边,也同样可以用一个特征向量表示。\n",
" edge_feature = np.random.randn(len(edge_list), d).astype(\"float32\")\n",
" # 对于边,也同样可以用边的权重作为边特征\n",
" edge_feature = np.random.randn(len(edge_list), 1).astype(\"float32\")\n",
" \n",
" # 根据节点,边以及对应的特征向量,创建一个完整的图网络。\n",
" # 在PGL中,节点特征和边特征都是存储在一个dict中。\n",
......@@ -99,7 +99,7 @@
"outputs": [
{
"data": {
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcUAAAE1CAYAAACWU/udAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAIABJREFUeJzs3Xd8zWf/x/HXOScnO6GEGiElxApBEkQitbporVaHlppVWru2Wylt1S5qtkRbe7WUalUQMxJbiMbeI4iMk7O/vz9Cfh1ozsnZuZ73I4/eJd/r+ijyznV9ryGTJElCEARBEATk9i5AEARBEByFCEVBEARBeEiEoiAIgiA8JEJREARBEB4SoSgIgiAID4lQFARBEISHRCgKgiAIwkMiFAVBEAThIRGKgiAIgvCQCEVBEARBeEiEoiAIgiA8JEJREARBEB4SoSgIgiAID4lQFARBEISHRCgKgiAIwkMiFAVBEAThITd7FyAIgmAvOoOOU3dOkXInhRxtDhISvu6+1CxVk1qlaqFUKO1domBjIhQFQShSMjWZLD26lPmH5pN2Nw1PN08ADJIBALlMjgwZar2aKiWq0Du8N+/XfZ/insXtWbZgIzJJkiR7FyEIgmBt93LvMWzbMJafWI5cJidHl1Og53yUPhgkA2/VeoupL04lwDvAypUK9iRCURAEl/dz6s90/bkrubpcNAaNWW24K9zxcvNicdvFdKjRwcIVCo5ChKIgCC7LYDTQ+5ferDi5ApVOZZE2vZXevF7jdZa0XYJCrrBIm4LjEKEoCIJLMkpG3lrzFlvObrFYID7irfTmhcovsO7NdSIYXYzYkiEIgkv6aPNHVglEAJVOxbbz2+i1qZfF2xbsS4SiIAgu5/dzv/P98e+tEoiPqHQqVqWsYkvaFqv1IdiemD4VBMGlZGoyCZ4VTLoq3Sb9lfAqwbn+58SWDRchRoqCILiUCQkTyNZm26y/HG0OY3eMtVl/gnWJkaIgCC5Do9dQakopsrRZNu3XR+nDnaF38FJ62bRfwfLESFEQBJex9tRaJGz/fb5MJmNVyiqb9ytYnghFQRBcxrdHvjVv6jQRWABMADaY/ni2NptFhxeZ/qDgcMTZp4IguIyjN4+a96AfEAucA3TmNXH81nEkSUImk5nXgOAQRCgKguASbmbfJFeXa97DNR/+8zpmh6LBaOBK5hUqFqtoXgMOTKPXcPjGYQ7dOETCpQQuZFxAo9fgrnCnnF85YoNiCS8bTkS5CPw8/OxdbqGIUBQEwSWkpqfi6eZp9tmmheWucCc1PdWlQvH8/fPMSpzFd0e+Qy6TozVoUevVf/ucQzcO8du53/B080Rr0NKxZkeGRA0hrEyYnaouHBGKgiC4BGtu1C8ICcnuNVhKhjqDvpv7siF1AwajAZ3x6cNnrUGL1qAFYPmJ5aw7vY4G5RrwQ4cfCPQPtEXJFiMW2giC4BJk2P9dnlzm/F9St57dSvCsYNafXo9ar/7PQPwng2RApVOx58oeasypwXdHvsOZdv45/++gIAgC4O/hb5ftGI/IkOHn7tzv074+8DWvr36de7n3Cj0NrTfqydZl0//X/ny85WOnCUYxfSoIgksILR1q/kIbA2AEpIcfOvKGDCZcgJGpymT+Z/M5HHqYOnXqEBYWRunSpc2rxw5mJc5iVPwoi08Bq3Qq4o7FYZSMzG091+FX54oTbQRBcBllp5blZs5N0x/cAez6x489DzQreBPFlcWZVmYax44d4/jx4xw7dgx3d3fq1KmT/xEWFkb16tXx8PAwvUYr+v3c77Rf1d6q70S9ld5MajmJfg36Wa0PSxChKAiCy2i/sj0/nfnJLn2/UuUVtrz7/zdmSJLE9evX80Py0ce5c+eoUqVKfkg+CsyyZcvaZRT1QP2A4FnB3M29a/W+vJXenOhzgsrPVLZ6X+YSoSgIgsv4/dzvvL76dZseCA7g5+7HitdX0Dqk9X9+rlqt5vTp0/mjyUf/lCTpbyFZp04datWqhaenp1Vr77KhC2tOrfnXVgtrUMgU1C9bn8SeiQ47jSpCURAEl2GUjAROD+RG9g2b9lvKuxQ3htxAITfhJeRfSJLErVu3/jWq/PPPP6lUqdK/pmADAwMtEioXMy5S45saNgnER3zdfdn49kaaVTJhbtqGxEIbQRBchlwm5/3g95l8eDJGhdEmfXorvRnaeKjZgQh5B4qXKVOGMmXK8NJLL+X/uFarJTU1NT8kZ8+ezfHjx1Gr1X8LyUejSh8fH5P6/SbpG4ySbf47PZKtzWbKvikOG4pipCgIgkvIyMhg3Lhx/Lj8R5T9lNwy3rL6Fg0ZMqqWrMrJPidRKpRW7euvbt++zYkTJ/42skxNTSUwMPBfU7DPPffcY0eVWoOWgMkB5l2ztQ64AGgBXyAaCC/4454KT84NOEc5v3Km921lYqQoCIJTMxqNLFmyhNGjR9OmTRtOp5zmDneIWBhBrt7MLRoF5OnmyZqOa2waiAClS5emRYsWtGjRIv/HdDodf/75Z35ILly4kGPHjpGVlUXt2rX/NrIMDQ3lTNYZ8wtoArQlL0HuAHFAWaCAGadUKNl1cRfv1H7H/BqsRIwUBUFwWomJifTr1w+FQsHs2bOJiIjI/7n5yfMZ/NtgqwWjt9KbSS0m0a+hY28xuHv3LidOnPjbwp5Tp07h3cSb+w3uY1AYCtdBOnmh+DIQWrBHZMj4uMHHzHplVuH6tgIRioIgOJ1bt24xYsQIfvvtNyZNmsR7772HXP7vA7om753M+F3jLb7/zlvpzciYkYyJHWPRdm3FYDDQ4YcObLy00fxGfgGOAnqgDNANMGH7Zb0y9Tjc+7D5/VuJOOZNEASnodPpmD59OrVq1SIgIIDU1FS6dOny2EAEGBY9jK9f/hovNy+LnEsql8nxcvNi6gtTnTYQARQKBXeNhdyX+CowirwwrIHJL+NsvUK4oMQ7RUEQnMK2bdsYMGAAFSpUYM+ePVSvXr1Az/Ws35Png57nrbVv8efdP8nR5ZjVv4/Sh+BnglndcTXVAqqZ1YYjscg2DDkQBBwHkoBGBX9UZzDz4korE6EoCIJDu3DhAkOGDOHo0aPMmDGDNm3amLxHr2rJqiT1SuL7Y9/zxZ4vuJF1g1x97n9uR5Ahw1vpzbM+zzIiZgTd63Uv1NYLR+KhsOBRc0bgvmmPuMkdM37E9KkgCA5JpVLx6aefEhERQf369Tl16hRt27Y1e9O6Qq6gW71u/Pnxn/zR5Q961OtBcLFgMORtKPf38Mffwx9fd1+UciXVSlajW71u/N75d872P0uv8F4uE4gAFYpVMO/BbOAEoCEvDM8CJ4FKpjVTyruUef1bmWNGtSAIRZYkSaxbt44hQ4bQqFEjjhw5QsWKlrvNXiaT0SiwEY0CG7F582ZmzprJN8u/IUebg4SEj9KHys9Utvk2C1uLqRjDxjMbTV+dKwOSyVtoIwHFyVt5WrDZ7HzRFaNNe8BGRCgKguAwUlJS6N+/P7dv3yYuLo5mzax76klKSgq1a
9UmpGSIVftxROFlw1EqlKaHog95i2sKwdfdl8YVGheuESsR06eCINhdRkYGAwcOpGnTprRr144jR45YPRAhLxRr1apl9X4cUb2y9dAb9XbpW2/UExsUa5e+/4sIRUEQ7MZoNPLdd99RvXp1VCoVp06dol+/fri52WYSqyiHoqebJ13qdLHLgpfIcpE8V/w5m/dbEGLzviAIdpGYmMjHH3+Mm5vbv06jsQWj0Yi/vz/Xr1/H39/fpn07ijPpZ6i3oJ7Vj8P7K193X1a+vrJA12zZgxgpCoJgU7du3aJbt260b9+efv36sXfvXpsHIsDFixcpUaJEkQ1EgGoB1WheqTnucneb9CdDRnm/8rxc5WWb9GcOEYqCINiEqafRWFtRnjr9q8VtF+PpZt2LjB95dIC6I29tEatPBUGwum3bttG/f3+CgoJMOo3Gmk6ePClCEXDTuFHpZCVOBJ+w6h2Uj+6drP1sbav1YQlipCgIgtVcuHCBDh060Lt3byZNmsSvv/7qEIEIYqQIcOzYMSIjI2n+bHOGxAzBW+ltlX683bx5Kfglxj4/1irtW5IIRUEQLM7Sp9FYQ0pKCqGhBbzryAWtWLGCli1bMmHCBKZPn85XL3zFR5EfWTwYvZXevFTlJVa9scoih7Jbm1h9KgiCxfzzNJqpU6dSoYKZx4lZkcFgwM/Pj9u3b+Pr62vvcmxKr9czfPhwfvrpJ9avX09YWNjffj7uaBz9tvRDbVAXah+jDBmebp6MjBnJ6NjRThGIIN4pCoJgIX89jWbp0qU0bdrU3iU90fnz53n22WeLXCDeuXOHt956C3d3d5KSkihRosS/Pqdr3a60rNySd9e/y6Hrhwp0cPo/+bn7EegfyOqOqwkt7VyjceeIbkEQHNbjTqNx5ECEovk+MSkpiYiICKKioti8efNjA/GRQP9Adr6/kz+6/EH76u3xUHjg7+GP/AmRIUOGn7sfnm6eNHuuGWs6ruFk35NOF4ggRoqCIJjJaDSyZMkSRo8eTdu2bTl16hSlSjnmzQf/VNRCcfHixQwfPpwFCxbQoUOHAj3z6OD0tW+uJV2Vzu5Lu0m8lkjCpQSuZV1Da9CilCsp5V2KmIoxNApsREzFGPNv33AQIhQFQTDZgQMH6NevH0qlks2bNxMeHm7vkkySkpLCyy877gZyS9FqtQwYMID4+Hh27dpFzZo1zWonwDuA9jXa075GewtX6HjE9KkgCAV28+ZNunbtSocOHejfvz979uxxukCEojFSvH79Os2aNeP69escPHjQ7EAsakQoCoLwnx6dRhMaGkqpUqVITU2lc+fOdjuNpjD0ej1paWnUqFHD3qVYzd69e2nQoAGvvPIKGzZsoFixYvYuyWmI6VNBcAI6g44Tt09w6Poh9lzZw8X7F9EYNLgr3KlQrAIxFWIILxdO2LNheLh5WLRvRzyNpjDOnTtH2bJl8fa2zkZ1e5IkiXnz5jF+/HiWLFlCq1at7F2S0xGhKAgO7PKDy8w5OIcFhxYgSRIGyYBKp/rX5204vQGlQonBaKBb3W70b9ifqiWrFqrvCxcuMGTIEI4dO8aMGTN47bXXHGrzvblc9Xi33Nxc+vbtS3JyMnv37qVKlSr2LskpOd/chyAUAZmaTLps6EK1OdX4OvFrMjWZZGmzHhuIALn6XDI1meToclhwaAF15teh3cp2pKvSTe5bpVIxduxYIiIiCA8PJyUlhTZt2rhEIIJrnmRz6dIlmjRpgkqlYv/+/SIQC0GEoiA4mG3nthE8K5g1KWtQ69VoDVqTntcZdaj1an49+ytVZlVh/en1BXpOkiTWrl1LjRo1OHPmDEePHmX06NF4etrmBgVbcbVFNvHx8TRs2JC3336blStXFrkDCSxNTJ8KggOZlTiLEX+MsMilr1qDFq1BS+cNnTl68yjjm45/4mjv0Wk0d+7ccfjTaAorJSWFkSNH2ruMQpMkienTpzNlyhSWLVtGixYt7F2SSxBnnwqCg5iVOIuR20c+cYq0MLyV3gxqNIiJzSf+7cczMjL49NNPWb58OZ9++ikffvghbm6u+72yTqfD39+f+/fvO/UIOCcnh549e5KWlsa6desICgqyd0kuQ0yfCoID+DXtV0b8McIqgQig0qmYcWAGPx7/Ecg7jebbb7+levXqqNVqTp06xccff+zSgQiQlpZGhQoVnDoQz507R1RUFB4eHuzevVsEooW59t8AQXACGeoM3tvwnkWmTJ9GpVPRZ3Mfit8vzvhPxjvtaTSF4ezvE3/99Ve6du3K2LFj6du3r8ssfnIkIhQFwc76bu5LjjbHJn3laHJ4/cfX+bb/t7z77rtOufm+MJw1FI1GI59//jnz589n3bp1xMTE2LsklyVCURDs6Ny9c2xI3YDGoLFJf5JMwu05N6o3r17kAhHyQrGgB2I7iszMTLp06cLt27dJSkqiXLly9i7JpRW9vxWC4EBmJc7CYDTYtE+1Qc20/dNs2qejcLaR4unTp2nQoAFly5Zl586dIhBtwKVWn0qSkdzcNNTqK0iSBplMiVJZCh+fWsjl7vYuTxD+Rq1XU2pKKbK12aY9qAc2A+eBXOAZoCVgwgE2nm6eXBt8jRJeT75Tz9VoNBqKFy9ORkYGHh6WPQrPGjZs2EDv3r2ZNGkS3bt3t3c5RYbTT59qtbe4fn0Rd+6sQ6U6jUzmhkz211+WEaNRjadnEM888xLly3+Mj49zn90ouIZD1w8hl5kxWWME/IGuQDEgDVgD9CEvIAvAXeHO7ku7aVu9ren9O6k///yToKAghw9Eg8HA2LFj+eGHH9i8eTORkZH2LqlIcdpQzMk5zfnzI7h373dAhiTlrdyTpMe/m8nNPUtu7kVu3lyMj08olSp9QYkSLW1YsSD83aEbh0w+rQYAd6DZX/69GlAcuEGBQzFHm8PBaweLVCg6w/Fu9+7do1OnTmg0GpKTkyldurS9SypynO6doiQZuHTpCw4dCufu3U1Ikjo/EP+bHqMxl6ysJE6ebMupU53R6x9YtV5BeJKESwmo9erCN5QN3AVMuPTeIBnYdWlX4ft2Io7+PvHYsWNERkZSs2ZNtm3bJgLRTpwqFLXaOyQn1+fSpS8wGnMB81+HGo0q0tPXkphYhaysI5YrUhAK6ELGhcI3YgDWAXUxKRQBrmVeK3z/TsSRQ3HFihW0bNmSCRMmMH36dJc/RMGROc1/eY3mJocPN0SrvYEk6SzSptGoxmhUc/RoLGFhf+Dv39Ai7QpCQWj0hdyGYQTWAwrAjGvzzJq6dWKOGIp6vZ5hw4bx888/88cffxAWFmbvkoo8pxgp6vVZHDkSjVZ73WKB+FcGQzbHjr1ATs5pi7ctCE/irijEimgJ2AjkAG+RF4wmclM4zffEhaZWq7l8+TJVqxbujklLun37Ni+88AKnTp0iKSlJBKKDcIpQTEvrh0ZzHUnSW60PgyGbkyc7YDRaPnQF4XHK+JYx/+FfgDvAO4DSvCZKeZs43+rEzpw5Q+XKlXF3d4ytWUlJSURGRtK4cWM2b95MiRJFZ2uMo3P4ULx37w/u3FmD
JFlgQcJTSWg0l7l06Usr9yMIeWKDYs0bLWYAh4CbwFTg84cfxwvehAwZTYKamN63k3KkqdPFixfTqlUrZs6cyeeff45CYcYwX7Aah54/kSQDqaldMBqtc3PAPxmNKq5cmUTZst3w9Kxgkz6FoiuyXCSebp6mv9srDowrXN++7r40Kt+ocI04EUcIRa1Wy4ABA9ixYwcJCQnUqFHDrvUIj+fQI8W7d3/FYDDxtI9CkiQD1659Y9M+haIpsnyk3Ra76Iw6YioWnUOl7R2K169fp2nTpty4cYODBw+KQHRgDh2KV65MxmDIsmmfkqTl+vX5GI1Fa2WeYHv+Hv60r97evFNtCim6QjTl/cvbvF97OXnypN1Ccc+ePURGRtKqVSvWr1+Pv7+/XeoQCsZhQ1Gnu0tmZqJZzw4cCC++CK+8kvfRpYupLUjcv7/drL4FwRSfNP4ETzcbX3irhYpXK5KVZdtvOO1FpVJx7do1qlSpYtN+JUnim2++oUOHDixatIgxY8YUyZtJnI3DvlPMyjqEXO6FwczppQEDoHVr8/o2GFRkZiZSsuQr5jUgCAVUv2x96petz4GrB9Abrbe6+hEZMiqVrIQqSUWVKlUYPHgwH330Eb6+vlbv215SU1OpWrUqSqWZy3TNkJubS58+fTh8+DD79u2zeSAL5nPYb1uyspIxGGxz8eq/6XnwoGgdgSXYz7IOy/BQ2OaQak83Tza+t5GVK1YSHx/PkSNHCA4OZvLkyWRn2/b9va3Y+n3ipUuXiImJQa1Ws3//fhGITsZhQzEz8yB5d+SYZ9EiaNsWPv4Yjh41/XmxkV+wlYrFKjKp+SQURusuzfdWejMyZiS1SucFRK1atVi5ciXbt2/n0KFDBAcHM2XKFHJy7PXNqHXYMhTj4+Np2LAhnTp1YsWKFfj4+NikX8FyHDYUDQbzD+r+4ANYvhzWrIFXX4VRo+Caicc85p2tKgjWd//+fdaNWkfgvUC83byt0oeXmxcvBb/E6NjR//q50NBQVq1axfbt20lKSiI4OJipU6e6TDjaIhQlSWLq1Kl06tSJ5cuXM2TIEGQymVX7FKzDYUOxMGrWBG9vcHeHl1+G0FBING/NjiBY1cWLF4mOjiasThhpM9J4p/Y7eCstG4zeSm9ervIyq95Y9dSVrqGhoaxevZpt27aRmJhIcHAw06ZNQ6WyzT5ha7F2KObk5PDOO++wcuVKEhMTad68udX6EqzPYUNRobDcsmWZDCQTL9SQy228IlAocpKTk4mOjqZ3797MnDkTpZuSRa8tYmKziXi5eRV6q4YMGV5uXnwS9Qlr31yLUlGwhSa1a9dmzZo1/P777+zfv5/g4GCmT5/ulOGYk5PDzZs3CQ4Otkr7Z8+epVGjRnh5ebF7926CgoKs0o9gOw4bin5+kZhzqGN2Nhw8CFotGAywbRscPw4NGpjWjre32FwrWM+mTZto1aoVc+fOZcCAAfk/LpPJGBQ1iGMfHiPs2TB83c1bFerr7kvVklU50PMA45uNNytg69Spw9q1a9m6dSt79+4lODiYGTNmOFU4nj59mpCQEKscpbZlyxYaN25Mnz59WLx4MV5eXhbvQ7A9hw1Ff/9IFArTp5H0eli8GNq1y1tos2EDTJgAFUw6tU1B8eLPm9y3IBTEnDlz6N27N5s3b6Zt27aP/ZyqJauS/EEyG97aQMvKLfFQeODn7vfUdn2Vvni6eRJdIZrlHZZzqu8p6jxbp9D1hoWFsW7dOn799Vd2795NlSpVmDlzJrm5jv/e3RpTp0ajkQkTJtCrVy82bNhA3759xftDFyKTJFMnFm1Dq01n//5AJKmQd86ZQaHwp2bNFZQsacYldYLwBAaDgaFDh7J161Y2b95MpUqVCvzs1cyr7Ly4k/1X9rPn8h5u5txEZ9ChVCgJ8A4gukI0jSs0JjYolsrPVLbirwKOHDnCZ599RmJiIsOHD+eDDz5w2FHSsGHDKF68OKNGjbJIew8ePOD999/nzp07rFmzhnLlylmkXcFxOGwoAhw+HE1m5j6b96tQ+BMdfRu53DZ7xwTXp1KpeO+997h//z7r16/nmWeesXdJhXbkyBHGjx9PUlJSfjh6ejrWu/hWrVrRu3fvJ47ITXH69GnatWtHixYtmDlzpsNcQyVYlsNOnwJUrDgcheLpU0aWJpO5U65cbxGIgsXcvn2bZs2a4ePjw9atW10iEAHq1avHTz/9xKZNm9i+fTvBwcHMnj0btdra17wVnKWmT9evX09sbCzDhw9n7ty5IhBdmEOPFCXJwL595dDpbtusT7nciwYNTuPpKVaRCYWXmppK69atee+99xg3bpxLv3s6dOgQ48eP5/Dhw4wYMYKePXvaZOQoSRIXMi5w6Pohzt8/j8agwU3uhrfMmxFdR5B+Mh1fT/MWLBkMBv73v/+xbNky1q5dS2RkpIWrFxyNQ4ciwL17v3HyZAeb3KmoVoNG05Y2bTa49BcvwTYSEhLo2LEjX331FV27drV3OTaTnJzM+PHjOXLkCCNHjqRnz554eFh25kWSJBKvJTJ131R+PfsrAAqZglx9LnqjHjlylHIlOo0OuYec4GeCGdRoEO/WebfAK3rv3btHp06d0Gg0rFq1itKlS1v01yA4JocPRYBTp97lzp11Vl50I0MmC6R/f3+eey6Y+fPnU7ZsWSv2J7iy5cuXM3DgQFasWEGLFi3sXY5dJCUlMX78eI4dO8bIkSPp0aOHRcJx+/nt9Nnch+tZ18nV52KUjAV6zkfpg4TEh+EfMrH5RLyUT14cdOzYMTp06EC7du346quvcHNz2LsTBAtz6HeKj4SEzMXDoywymfX+YCoUPoSHb+HgwUPUqVOHsLAwfvjhB5zgewbBgUiSxOeff87IkSOJj48vsoEIEBkZyS+//MK6devYvHkzVatWZd68eWg05n1zm6XJottP3XhtxWuk3UsjR5dT4EAEyNHloNKpmJc8j6qzq7L/yv7Hft7y5ctp2bIlEydOZNq0aSIQixinGCkCaDTXOXSoATrdbSRJZ9G25XIfwsJ+o1ix6PwfO3z4MF27diUoKIgFCxaIpdfCf9LpdHz44YccPXqUTZs2iT8z/5CYmMj48eM5efIko0aNonv37gVesHLh/gVilsRwL/cear1lFvJ4uXnxeYvPGdRoEJD3+zds2DA2btzI+vXrCQsLs0g/gnNxipEigIdHOcLDk/HyqopcbpmzIWUyD9zcSlC37o6/BSJA/fr1SU5Opn79+tStW5elS5eKUaPwRA8ePKB169bcunWLXbt2iUB8jIYNG7JlyxZWr17Nzz//TNWqVVmwYAFa7dPvTD1//zyRiyK5mX3TYoEIkKvPZUz8GD7f/Tm3b9/mhRde4PTp0yQlJYlALMKcZqT4iNGo49KlL7hy5SuMRjVgXvlyuTclS7YmJGQBSuXTl8gfOXKErl27EhgYyMKFCylfvrxZfQqu6cqVK7Ru3ZqYmBhmzZolptsKaP/+/YwfP57U1FRGjRpF165d/zVyzFBnUOObGtzOuW3SVKkpPOWeeO/wpk/jPowfP94
qR8IJzsNpRoqPyOVKKlX6lPr1EylR4mXkck9ksoK9vJfJ3JDLvfD1rUutWmupVWv1fwYi5O3HSkpKIjIykrp16xIXFydGjQKQ9w1TVFQU77//Pt98840IRBNERUWxdetWli9fzrp16wgJCWHRokXodP//euTDXz7kfu59qwUigNqoJqdZDr2H9haBKDjfSPGfNJrrXLs2lz17JlGxohyFwgNQkDeClAESRqMKD49AnnmmJeXLD8DXN9Ts/o4ePUrXrl0pV64cCxcuJDAw0EK/EsHZbNmyhffff5958+bxxhtv2Lscp7d3717Gjx9PWloao0ePpnR0ad7Z8A4qnfW3Y7nJ3WgU2IiErgliO1YR5/ShCHlB9dZbb3H6dAoqVSoazRWMRjVyuTtKZSl8fGqjUFjubEatVsuXX37JnDlz+Oqrr+jWrZv4i1TELFiwgHHjxrF+/XqioqLsXY5L2bMDPIXIAAAgAElEQVRnD+PGj2Nn/Z0YvA0269fX3ZdVb6yiVVVx5nFR5hKhOH36dNLS0pg3b55N+z127Bhdu3alTJkyLFy4kAqmXcUhOCGj0cjIkSP56aef2LJli9Xu6Svqtp3bRtsVbck12PYmjueDnmdn15027VNwLE73TvFx7LUfLCwsjIMHD9K4cWPq16/Pd999J941ujC1Ws3bb7/Nvn372LdvnwhEK5q8b7LNAxEg8VoiF+5fsHm/guNw+pGiTqcjICCAc+fOERAQYLc6jh8/Trdu3QgICGDRokVUrFjRbrUIlpeenk7btm2pWLEiS5YscbjbIFyJRq/B70s/dEYT9yN//o9/1wORgAmzoR4KDya1nMTARgNN61twGU4/UkxOTqZSpUp2DUTIu6X8wIEDxMbGEh4ezqJFi8So0UWkpaURFRXF888/z7Jly0QgWtmJ2yfwcjNjDcDov3x8ArgBNU1rQmPQsOvSLtP7FlyG04eiIx2lpVQqGT16NPHx8cyfP5+XXnqJS5cu2bssoRD27t1LkyZNGDZsGF988QVyudP/lXF4ydeTTR8l/tNpwAcw47KbpGtJhetbcGpO/zd8+/btNG/e3N5l/E3t2rU5cOAATZs2JSIigoULF4pRoxNavXo17du3Z+nSpfTq1cve5RQZp+6cIldfyPeJR4Ew8nZlmehG9o3C9S04NacOxdzcXA4ePEhsbKy9S/kXpVLJqFGj2LFjB4sWLeLFF18Uo0YnIUkSkydPZsiQIWzbto2XXnrJ3iUVKVmarMI1kAFcAuqa97gkSegMlj1fWXAeTh2K+/bto06dOvj5+dm7lCcKDQ1l//79tGjRgvDwcObPny9GjQ5Mr9fTt29fli1bxv79+8UZmHbgJi/kqUDHgIrAfx9W9VgSEgq5ONmmqHLqUIyPj3e4qdPHcXNzY8SIESQkJLB48WJatmzJxYsX7V2W8A9ZWVm0adOGCxcusHv3bnFakZ2U8imFzJx5z0eOkTd1aiYPhQdymVN/aRQKwal/5x1pkU1B1KxZk3379vHiiy8SERHBvHnzMBqtd6ajUHDXrl0jNjaW8uXLs2nTJvz9/e1dUpFVv2x9fN19zXv4MpAF1DK//5CSIeY/LDg9pw3FzMxMTp486XRHbLm5uTF8+HASEhKIi4ujZcuWXLggNgvb0/Hjx4mKiuKtt95i4cKFKJVKe5dUJKnVanbu3MmulbvIyc0xr5FjQA2gYHcEPFZMxRjzHxacntOGYkJCAg0bNnTaPWM1a9Zk7969vPLKK0RGRjJ37lwxarSD33//nZYtWzJ58mRGjBghzrC1Ib1eT2JiIl9++SUvvPACpUqVYsSIEfjqffF0N/Pv9WtAB/Nr8nP3IzbI8RbuCbbjtCfaDBo0iFKlSjFq1Ch7l1Jop0+fplu3bnh5efHdd99RuXJle5dUJCxevJhRo0axZs0amjRpYu9yXJ7RaOTkyZPEx8cTHx9PQkICQUFBNG/enObNmxMbG0uxYsUA+OT3T5h9cDZaw9MvILY0X6Uvt4fexktpuQsEBOfitKEYFhbGggULaNSokb1LsQiDwcCMGTOYNGkS48aNo2/fvmKjuJVIksT//vc/Vq5cyebNm6lWrZq9S3JJkiRx9uzZ/BDcsWMHxYsXzw/BZs2aUapUqcc+ezHjIjW+qYFar7ZZvUq5kg8jPmTWK7Ns1qfgeJwyFO/cuUPVqlVJT093uUtdU1NT6d69O0qlksWLF4tDpy1Mo9HQvXt3zp8/z8aNG5/4RVkwz9WrV/NDMD4+HqPRSIsWLfJD0JQzgVsta8W289vQG/VWrPj/yfQyFkcspmvbrjbpT3BMTjkU2bFjB02aNHG5QASoXr06u3fvpm3btjRs2JBZs2aJd40Wcu/ePV588UU0Gg3x8fEiEC0gPT2dNWvW0KdPH6pVq0a9evX45ZdfaNSoEX/88QdXrlxh6dKlvP/++yYfkv9tm2/xdLPNmgEfpQ/vBL7DxCETefXVVzlz5oxN+hUcj1OGorNtxTCVQqFg8ODB7Nu3j1WrVtG0aVPOnj1r77Kc2vnz52ncuDENGjRg9erVeHmJd0bmyMzM5JdffmHw4MHUrVuXKlWq8P333xMSEsLq1au5desWq1ev5sMPPyQkJKRQC5fK+ZVjzitz8FH6WPBX8G9ymZyg4kEs7bWUlJQUmjZtSnR0NIMHDyYjI8OqfQuOx2lD0Rk27RdWSEgICQkJtG/fnkaNGvH111+LUaMZEhMTiYmJoX///kyZMkW8qzVBbm4u27dvZ/To0URFRVG+fHlmzpxJQEAA8+fPJz09nU2bNjFo0CDCwsIs/t+2S1gX2ldvj7fS26Lt/lUxj2JsfHsjbnI3PDw8+OSTT0hJSSE7O5vq1aszf/58DAaD1foXHIvTvVO8cuUK4eHh3Lx5s0h9cUtLS6Nbt27IZDIWL15M1apV7V2SU9iwYQMffPABS5Ys4dVXX7V3OQ5Pp9ORlJSU/04wKSmJOnXq5C+OiYqKsvk2KIPRwNvr3mZL2hZUOpXF2pUho7hncXZ13UXtZ2s/9nOOHDnCwIEDuX//PjNnziwS34wXdU4XikuXLmXLli2sWrXK3qXYnMFgYM6cOUyYMIExY8bQr18/FApxRuPjSJLEzJkzmTZtGj///DPh4eH2LskhGY1Gjh07lh+Ce/bsoXLlyvmLY5o0aeIQZwsbJSNDtw1lXtK8wt+gAXgrvSnnV45f3/2VKiWqPPVzJUli3bp1DB06lLp16zJ16lSxAM6FOV0odunShejoaHr37m3vUuwmLS2N7t27I0kSixcvJiREHEv1VwaDgUGDBhEfH8+WLVtMXuDhyiRJ4s8//2T79u3Ex8ezc+dOAgICaN68OS1atOD555+3+4XdT7P/yn7eXPMm99T3zBo1KmQK3BXuDGo0iE+bfoq7wr3Az6rVaqZPn8706dPp2bMno0aNEscBuiCnCkVJkqhQoQI7d+6kSpWnf3fn6oxGI3PmzOGzzz5j1KhRDBgwQIwagZycHN555x1UKhXr1q3L3wxelF2+fDk/BOPj41EoFLRo0YIWLVrQrFkzypcvb+8STZ
Kry+Xbw98yZd8U7qvvk6PNQeLpX8a8ld4YJSMda3ZkePRwapU2/3DU69evM2rUKH7//XcmTJhA165dxd89F+JUofjnn3/SsmVLLl26JI7jeujcuXN0794dnU7HkiVLivRG9Js3b/Lqq69Su3ZtFixYgLt7wUcBruT27dvs2LEjPwgzMzPzR4LNmzencuXKLvH3R5Ikdl7cyfrU9ey5vIfU9FQkKe/aJ0mS0Bl1lPEpQ2T5SF6o/ALv1H6H4p7FLdZ/UlISAwcOJDc3l6+//lqciuQinCoU582bR2JiInFxcfYuxaEYjUbmzp3LuHHjGDFiBIMGDSpy37mmpKTQunVrevTowZgxY1zii35BZWRkkJCQkB+CV69eJTY2Nj8Ea9WqVST+exglI3dVd1Hr1SgVSvw9/K26ahXygnnlypUMHz6cqKgoJk+eTFBQkFX7FKzLqUKxY8eOtGnThs6dO9u7FId0/vx5unfvjkajYcmSJVSvXt3eJdlEfHw8b7/9NtOmTSsSfzZUKhV79+4lPj6e7du3c/r0aaKiovJHg/Xq1XPJgy0cmUqlYsqUKcyaNYu+ffsyfPhwfH3NvP5KsC/JSRgMBqlkyZLS1atX7V2KQzMYDNKcOXOkkiVLSpMnT5b0er29S7KqpUuXSqVLl5bi4+PtXYrVaDQaaffu3dL48eOl2NhYycfHR4qJiZHGjh0r7dy5U1Kr1fYuUXjo8uXLUqdOnaTy5ctL33//vWQwGOxdkmAipxkpHj16lLfffpvU1FR7l+IUzp8/T48ePcjNzWXJkiXUqFHD3iVZlCRJfPbZZ8TFxbF582Zq1qxp75IsxmAwcPTo0fyR4L59+wgJCcnfKxgTEyNGIQ5u//79DBgwAJlMxtdff+0yFxcUBU4TitOnT+fs2bPMnTvX3qU4DaPRyIIFCxg7dixDhw5l8ODBLjGtptVq+eCDD0hJSWHTpk2UKVPG3iUViiRJnD59Oj8Ed+3aRdmyZfNDsGnTpjzzzDP2LlMwkdFo5Mcff2TUqFE0bdqUSZMmERgYaO+yhP/gNKHYunVrunXrxhtvvGHvUpzOxYsX6dGjB9nZ2SxZssSpR1UZGRm8/vrr+Pn5sWzZMnx8rHsuprVcuHAhPwTj4+Px9vb+25VKZcuWtXeJgoVkZ2czadIk5s2bx4ABA/jkk0/w9rbuAiDBfE4RijqdjoCAAM6fP0/JkiXtXY5TkiSJhQsXMmbMGIYMGcInn3zidKPGS5cu0apVK1q2bMn06dOdaoXtjRs32LFjR34QqtXq/BBs3rw5lSpVsneJgpVduHCB4cOHk5iYyFdffcVbb71VJFYFOxunCMX9+/fTt29fjhw5Yu9SnN7Fixfp2bMnDx48IC4ujlq1zN/EbEvJycm0bduWYcOGMWDAAHuX85/u3bvHrl278kPw5s2bNG3aND8Ea9SoIb4gFlEJCQkMHDgQb29vZs6cSUREhL1LEv7CoUJRkiQMhmyMRjUymRsKhS9yuZKJEyeSkZHB1KlT7V2iS5AkiUWLFjF69GgGDRrEsGHDLDJq1Ouzyc4+QlbWIbKzj2AwZAMylMqS+PlF4ucXjo9PLeRy0zbVb9q0iR49erBw4ULatWtX6DqtITs7mz179uSHYFpaGtHR0fkhWLduXaca2QrWZTAYiIuLY8yYMbz88st88cUXYsrcQdg1FCVJIjNzP+npP5GRkUBOzkkkSYtMpkCSjIARD48gDh7Mplq1TrRsOQGFwjnfITmiS5cu0atXL+7du0dcXByhoaEmtyFJBu7d+43LlyeTmbkPudwLo1GDJGn+9nlyuTcymQKjUUupUm9QocIQ/Pzq/Wf733zzDZ9//jk//fQTDRo0MLk+a9FoNBw4cCA/BI8ePUpERER+CDZo0KDInqgjFFxmZiYTJ05k8eLFDBkyhEGDBtn8FhLh7+wSikajhhs34rhyZTJa7S2Mxlzg6fcEyuU+gMSzz3ahYsVP8PISp9RbgiRJfPvtt4waNYqBAwcybNgwlEplgZ69e3cLqandMRpzHo4KC0qBXO6Bt3d1atT4ER+ff28XMRqNDB06lC1btrBlyxa7v3PT6/UcPnw4//zQ/fv3U6NGjfxTY6Kjo8XiCcFsZ8+e5ZNPPuH48eNMmTKFDh06iOl1O7F5KGZmJpGS0hGdLh2jMceMFtyQy5UEBY2lYsWhyGRiSsoSLl++TK9evUhPTycuLo7atR9/vxyATpfBn39+yN27mzAaC3O/nQy53JOgoDFUrDg8//dSpVLRuXNn7t69y4YNG+yyHUGSJE6ePJkfggkJCQQGBuafGhMbG0vx4pY7R1MQALZv387AgQMJCAhgxowZ1K1b194lFTk2C0VJMnL+/CiuXZv1cGRYOHK5D15elald+xc8PcXVQJYgPbyKasSIEQwYMIDhw4f/a9SoVl/i8OFodLr0f02Rmksu98bfvyG1a2/m7t0s2rRpQ9WqVfn222/x8PCwSB//RZIkzp07lx+CO3bswM/PLz8EmzZtyrPPPmuTWoSiTa/Xs2jRIsaNG0fbtm2ZOHEipUuXtndZRYZNQlGSDJw69a4FRhb/pECpfIZ69fbh7S1uoreUK1eu8MEHH3Dr1i3i4uKoU6cOAGr1ZQ4dikCnuwcYLNqnXO6JQlGd7t0f8Oab7/LZZ59Zffro2rVr+SEYHx+PXq/PD8FmzZqJg50Fu7p//z6fffYZP/zwAyNHjqRfv37iPbUNWD0UJUkiNbUbd+6ssXAgPiJDqQwgPPwQnp4VrNB+0SRJEnFxcQwbNox+/foxbNhAjhypg0ZzFUsH4iMaDeh04bz6arJV2r97927+XsH4+Hju3LlDs2bN8oMwJCREvMcRHE5qaipDhgwhLS2NadOm8eqrr4o/p1Zk9VC8cWMpaWkfmfn+sKAU+PrWJTz8IDKZ3Ir9FD1Xr17lgw8+oFGjJGJjswDLTJk+iVzuTY0ayyhVqvBbL7KyskhISMgPwfPnzxMTE5MfgnXq1EEuF39eBOewdetWBg0aRIUKFZgxY4bT7DF2NlYNRY3mOgcPVjNxZaJ55HIfKlWaSIUKA63eV1GTkbGXw4ebI5drbdKfm1txGjY8i1Jp2ulFarWaffv25Yfg8ePHadCgQX4IRkREFHhlrSA4Ip1Ox7x585g4cSJvvvkm48ePF6d8WZhVQ/HYsVe4f/8PQG+tLv5GLvemQYMzeHqKQ3ctKTk5nOzswzbrTybzIDBwAMHBXz318/R6PUlJSfkhePDgQUJDQ/P3CjZu3BgvLy8bVS0ItnP37l0+/fRTVq9ezZgxY+jTp4/4hs9CrBaKubnnSUqqhdGotkbzj5X3xXQgwcGTbNanq8vJSeHQoUiLrBg2hUJRjOjo2387/cZoNHLixIn8Q7R3795NpUqV8kMwNjYWf39/m9YpCPaUkpLCoEGDuHr1KtOnT+fll1+2d0lOz2qhmJY2iOvX5yJJtplye+RxX0wF86Wm9uTmzTistbjmSRQKP0JCFvLgQf38ENyxYwclS5b825VKpUqVsmldguBoJEnil19+YfDgwYSEhDB9+
nSqVatm77KcllVCUZKM7NlTHIMhy6zn4+Nh6VK4fRtKlIDhw+HhroD/pFD4Ub369xZZqCHA3r2l0enumPxcZiZMmQLJyVCsGPTsCS1bmtbGwYOezJ4dkH9qTLNmzahQQawwFoTH0Wq1zJ49m0mTJvHee+8xduxYcQ+nGayy9C4399zDs0tNl5wMCxfmBeHmzTBzJphyTq7BkM2DB7vN6lv4O53uLnr9A7Oe/fprcHOD9eth9Oi838cLF0xrIyqqOJcvXyYuLo4uXbqIQBSEp3B3d2fIkCGkpKSgUqmoXr068+bNQ6+3zZoOV2GVUMzKOmT21oi4OOjcGWrWBLkcSpXK+yg4iYyMBLP6Fv4uK+swcrnphxPn5kJCAnTvDl5eULs2NG4M27aZ1o7RmG6lva2C4LpKly7NggUL+O2331i1ahX16tVj+/bt9i7LaVgtFM3ZhmEwwJkz8OABvPsudOyYN+LQmLg1TqU6bXLfwr+p1ReRJNO/y7x6FRQK+OvALjgYLl40rR253Au1+orJ/QuCAHXr1mXHjh2MGzeOXr160a5dO86ePWvvshyeVa5e1+luA6a/qrx/H/R62LULZs3Km34bPRp++CHvnVRBGQwqfvjhByRJyv8wGo1P/feCfI412rBXvwVpIyLiCi++mIupK71zc+GfF0b4+IDK5EGfzKarlwXB1chkMl5//XVat27NjBkzaNSoEd27d2fMmDFipfYTWCUUzRldADw6+7l9e3i0H7VjR/jxR9NCUSaT+O23rchkcmQyGXJ53j8fffzz3wvyOQVtQ6FQmPyMJfq1ThubgLmYeoqNl9e/A1Cl+ndQ/jcJuVzsvRKEwvL09GTkyJF07dqVUaNGUa1aNSZMmEC3bt3E5df/YJVQNPciYD+/vPeHfz3Wz5wj/mQyN378cZlZNQj/7/btdM6c+Q6DwbRQDAzMmwq/ejXv/wOcPQvPPWda/5Kkw81NnNYhCJZStmxZlixZQnJyMgMHDmTu3LnMnDmT2NhYe5fmMKzyTtHHp5ZZCzQAXn4ZNmzIm0rNyoK1ayEqyrQ23N3LmdW38He+vvWQJNP3J3p5QZMmsGRJ3lTqiROwbx+88IJp7chkHnh4lDG5f0EQni4iIoLdu3czbNgwOnfuzJtvvslFU1/6uyirjBR9fcORydwB098HdemSt9Cmc2dwd4emTeG990xrw8+vgcn9Cv/m5RUMmLe1ZuBAmDwZOnQAf/+8f69UybQ2fH3DzOpbEIT/JpPJePvtt2nTpg1Tp04lPDycPn36MGLECHx9fS3e3/3c+xy+cZjk68mcTj9Nri4XpUJJGd8yRJaLJLxcOMHPBNv9BhCrbN7X67PZu/cZs98tFoZM5klw8GQCA/vZvG9XdOTI8zx4YPstLjKZJ889N5agoJE271sQiqKrV68yYsQIduzYwRdffEHnzp0LfYuM1qBl3al1fLX3K07dOYWX0gu1Xo3W8P8nncmQ4evui0EyoJQr+TDiQz6K/IgKxeyzL9lqx7wdPhxNZuY+azT9VDKZJw0anMbL6zmb9+2K0tM3cvr0e2afTmQumcyTRo0uiOlTQbCxAwcOMGDAACRJ4uuvvybK1PdX5B09t+jwIoZuG4okSWRpC/71w0PhgUwm45Uqr7Dg1QWU8rHtUY5Wu0yuYsXhKBR+1mr+iYoVixKBaEElS7Z+OBVuSzJKlHhRBKIg2EGjRo3Yv38//fr1o2PHjrz77rtcuVLw/cKXH1ymyZImDP5tMJmaTJMCEUBj0KDWq9mctpmqs6uy7tQ6U38JhWK1UCxZsjVyuYe1mn8sudyXChWG2bRPVyeTKQgKGgOYt3DKHHK5J0FB/7NZf4Ig/J1cLqdz586kpqZSuXJl6taty/jx41H9x2bjxKuJ1J5Xm8RrieToCnexvNag5YHmAV1+6sKArXkjV1uwWijKZAqqVp2LXG7y5jQz+3PDz68uJUq8ZJP+igqVSsXXX1/m4kU9kmT9F+ByuTdly36Av3+E1fsSBOHpfH19mTBhAocPH+bUqVNUr16dFStWPDagEq8m0uL7FmRqMtEbLbeeRKVT8e3hb+m7ua9NgtFqoQhQunRHihdvZpPpN5nMgxo1ltt95ZIr2bVrF2FhYVy/fpOWLXejUFh7tChDqSxJ5cpfWrkfQRBMERQUxKpVq1i2bBlTp04lJiaGpKSk/J+/8uAKL/74YqFHh0+i0qn44fgPTN0/1Srt/5VVQxGgevUlD98tWi+s5HJvQkK+wdNT3KJgCZmZmfTp04d3332XadOmsXz5cgIDG1G9+lLkcuvdZK9Q+FOnzm8oFNbrQxAE8zVp0oSDBw/So0cP2rRpQ9euXbl27Rqd1ndCpbPu4f05uhzG7RhHanqqVfuxeii6u5eiXr3dKBT+WCMY5XJvKlYcQZky71u87aLo119/JTQ0FL1ez8mTJ2nTpk3+z5Uu3ZGQkAVWCEYZCkUx6tbdiY9PDQu3LQiCJSkUCrp3786ZM2d49tlnCXk7hMTLiRadMn0StUHNm2vexGC03qXnVtuS8U85OakcPdoEvT4LSTLx2osnkMu9eO658VSsONQi7RVld+/eZdCgQezZs4dFixbRokWLJ37uvXu/c+rU2xgMqkL/Xsrl3nh6Pkdo6E94e1ctVFuCINiW1qAl4KsAsnS227Ll6+7LkrZLeKPmG1Zp3+ojxUd8fKrToMEZSpZ8tdCLb+RyL9zdy1Knzm8iEC1g7dq11K5dmxIlSnDixImnBiJAiRIv0rDhOQIC2jz8vTT9j5FM5o5c7kXFiiOJiDgmAlEQnNBPqT8hyWyzKvSRbG02k/dOtlr7Nhsp/lV6+kbS0j5Cr88w6d5FudwHMFK2bE8qV56EQmGbla2u6ubNm3z00UecOnWK7777jsaNG5vcRmZmIleuTOXu3V8ABUbj0160y/MPiy9bthfly/cTe0oFwYmFLwjn8M3DNu/Xy82LQx8cokYpy79usUsoQt6JBxkZO7l8eTIZGTvyrwj6a0jKZB4YDHIkSY2PTyUCAwdRpkwX3NzEPWCFIUkS33//PUOHDqVXr17873//w9OzcCtLtdp07t79hQcP9pKZuRe1+iJGoxZJAr1eTokSdSlWLJZixaIpWbKVzfewCoJgWZmaTAImB6Az6kx/+A6wGbgBeAMvAibkm7vCnS+af8GQxkNM7/s/2C0U/0qSDKhUZ8jKSkatvojBkI1M5oFSWRK1OpAWLT7k0qU7YruFBVy+fJnevXtz8+ZNFi9eTL169aza35o1a1i1ahVr1661aj+CINjWrou7aLOyDZmaTNMeNADfABFAI+AisALoDQQUvJnXQl5j4zsbTeu7AKxyS4apZDIFPj418fGp+YTPGMzZs2epWlW8dzKX0Whk/vz5fPrppwwaNIihQ4eiVFr/At+AgADS09Ot3o8gCLaVfD0Zjd6MhXbpQBYQRd6GhMpABeA40Ny0/q3BIULxv0RHR7Nv3z4RimZKS0ujZ8+e6HQ6EhISqFHDdtseRCgKgms6c/cMGhMvIH+q2yZ+eo6JDxSQzVafFkbjxo3Z
t8/2N244O71ez5QpU4iKiqJDhw7s3r3bpoEIULJkSe7evWvTPgVBsD6zN+sHAD7AXvKmUs+SN4Vq4qtJg2TAKJl33+vTOMVIsXHjxixYsMDeZTiVEydO0L17d/z9/Tl48CCVK1e2Sx2PQlGSJPFOWBBciFJu5usXBfA28Ct5wVgOqIXJaSR7+D9Lc4qRYlhYGJcuXSIjI8PepTg8rVbLuHHjaN68Ob179+aPP/6wWyACeHh44OnpSWamiS/jBUFwaGX9yiI3N0LKAN2A4UBn4D5Q3rQmfN19rfKNtlOEopubG5GRkRw4cMDepTi0pKQkwsPDOXToEEeOHKFnz54OMToTU6iC4HoiykXg6+Fr3sM3yZsu1ZI3WswG6prWRM1ST1qYWThOEYqQN4W6d+9ee5fhkFQqFUOHDuW1115j1KhRbNy4kcDAQHuXlU8sthEE1xNeNhydwYw9ipC30nQaMAW4QN5o0YTpUzlyYoNizev7PzjFO0XIW4E6dar1rw1xNgkJCfTo0YOIiAiOHz9O6dKl7V3Sv4hQFATXU7FYRbyUXuTqc01/+MWHH2bycfeh6XNNzW/gKZxmpNioUSOSkpLQ661/ErszyMzMpG/fvnTq1Ilp06axYsUKhwxEENOnguCKZDIZH0V+hIfC9qdTuSvceTG4EKn6FE4Tis888wwVKlTg+PHj9i7F7rZu3Urt2rXRarX/ut7JEYmRoiC4pg8jPrR5n55unvRv2I2pOE8AABBxSURBVB83uXUmOp0mFOH/N/EXVffu3eP999+nT58+fPfdd3z77bcUL17c3mX9JxGKguCayvmVo131djYdLbrJ3awaxk4VikV5E/+6desIDQ2lWLFinDhxgpYtW9q7pAIT06eC4Lrmtp6Ll9LSF48/no/Sh9mvzKa0j/VeFTldKBa1Fag3b97kjTfeYPTo0axZs4ZZs2bh62vmMmg7ESNFQXBdJbxK8H277/FWWvcqP6VcSYPyDXg/7H2r9uNUoVi1alVUKhVXr161dylW9+h6pzp16hASEsLRo0eJjo62d1lmEaEoCK7ttWqv0a9BP6sFo5vcjXJ+5VjdcbXV9147zZYMyFvt1LhxY/bv30/Hjh3tXY7VPLre6caNG2zdupX69evbu6RCEdOnguD6vmzxJVqDlgWHFph/LupjKOVKyvmVY1+PfQR4m3C3lJmcaqQIrj2FajQamTdvHuHh4cTExJCUlOT0gQhipCgIRYFMJmP6S9OZ1HIS3m7eyGWFjxcfpQ8xFWNI/iCZcn7lLFDlf3OIS4ZNsWfPHgYPHszBgwftXYpFPbreSaPRsHjxYmrWtM4RRvag0Wjw8/NDo9E4xLFzgiBY19l7Z3lr7Vv8efdPsrXZJj/v5eaFQq7gm1bf0LlOZ5t+3XC6kWJ4eDgpKSmoVJYbntuTXq9n6tSpREVF0a5dO/bu3etSgQh5h4J7eHiQlZVl71IEQbCBKiWqkNQriWUdlhFdIRpPN088FZ5PfUYuk+Pn7keAdwCjm4zmfP/zdAnrYvNvpJ3qnSKAl5cXtWvXJjk5mdhY65x9ZysnT56ke/fu+Pr6kpiYSHBwsL1LsppHU6j+/v72LkUQBBuQy+S0qdaGNtXacPbeWbakbSHhUgIHrx3kZvZN9EY9CrkCH6UPoaVDeT7oeWKDYmlZuSUKucJudTtdKML/v1d01lDUarV8+eWXzJkzhy+++MJhbrOwpkehaM9rrARBsI8qJarQv2F/+jfsb+9S/pNThmJ0dDRxcXH2LsMsSUlJ9OjRg4oVK3LkyBGHus3CmsQKVEEQnIHTvVMEiIqKYt++fTjTGqHc3FyGDRvGa6+9xogRI9i0aVORCUQQK1AFQXAOThmK5cqVw9/fnzNnzti7lAJJSEggLCyMy5cvc/z4cTp16uTy06X/JEJREARn4JShCM5xOHhWVhYfffQR77zzDpMnT2blypUOe72TtYnpU0EQnIHThqKjHw7+22+/ERoaSm5uLidPnqRdu3b2LsmuxEhREARn4JQLbSAvFGfPnm3vMv7l3r17DB48mJ07d7Jo0SJefNE6F2E6GxGKgiA4A6cdKdauXZvr16871JTc+vXrCQ0Nxd/fn5MnT4pA/AsxfSoIgjNw2pGiQqGgQYMGHDhwgNatW9u1llu3bvHxxx9z/PhxVq9eTUxMjF3rcURipCgIgjNw2pEi2P9wcEmS+OGHH6hTpw7BwcEcPXpUBOITiFAUBMEZOO1IEfJWoH7xxRd26fvKlSv07t2ba9eusWXLFsLDw/+vvbuNjau68zj+u/faM5PxOB4cJ87YMUKWE7wJIjThKRJNSF7QLUFVUSq6u2lRi4OWEiroSiUS+6Kr7CKW7gNB2wVSdanKbqtWaiTERi2ogoXtSpCEZB0gD9BQkjgb7MRBsT0zHj/MOfvCU4hKCJ47T8fM9yPxIpHP+Z+gSL/8z73n3JqsY674w/aptbbujqMAmDvmdKd4ww03aP/+/ZqamqpaTWOMnnrqKa1atUpr1qzRvn37CMRZ4FJwAHPBnO4UE4mI1q9v1759f61k8pQmJk7L2in5/jzF48s0f/4aNTevVjzeK68M3/Y6duyYtmzZolwup5dfflkrVqwow5+ifnApOADXzbnvKUpSJnNIAwOP6cyZnymXm1ZDg+T7H+8WgyAha62CoElLljygVOpuRSLFf7k5n89rx44deuSRR/TQQw/p/vvvVxDU7hb3ueraa6/Vk08+qeuuu67WSwGAi5pTneLExP/pyJE7NTr6qoyZlJRXJPLJP5/Pz3zc0piMTpz4W504sV0dHfepu/vv5PvRWdV866231NfXp3g8rtdee009PT1l+JPUJ162AeC6OfFM0Vqr999/Wnv29Gpk5L9lzLikfFFzGDMuY3I6ffoJ7d3bq9HR1y/585OTk9q+fbvWr1+vvr4+vfjiiwRiiQhFAK5zvlO01uidd76loaGfyphMyfMZk1Uud1z9/evU2/uMFi3a9LGfef3113XXXXepq6tLBw4cUFdXV8l1wQF+AO5zulO01urtt+/W0NB/lCUQL2RMVkePfl1nzuz68PfGx8e1bds2bdy4UQ8++KB2795NIJYRnSIA1zndKQ4M/IPOnPm5jMlWZH5jxnX06J2aN69b/f1p9fX16ZprrtEbb7yh9vb2itSsZ21tbTp48GCtlwEAn8jZUMxkjur48b8pPD+sHGOyeuWVm3XvvU16/PF/1e23317RevWM7VMArnMyFK3N6/DhO2TMRFXqRaMZPf/8N7V8OYFYSWyfAnCdk88Uz537lXK59ySZqtRrbMxreHinpqbOV6VevSIUAbjOyVA8efLvPzxjWD2+BgefrnLN+sL2KQDXOReK4+PvKp0+EHr8qVPSLbdIDz9c3DhjshoY+CfNwQt+5owFCxZoeHiY/8cAnOVcKH7wwW9UyrIef1zq7Q03dnr6vHK5E6Fr49JisRiXggNwmnOhODLy29BHMF56SWpqklatClfb8xqUTu8PNxizwhYqAJc5F4qjo3tCjctkpB//WNq6NXztfH4sdH3MDi/bAHC
Zc6E4NTUUatzTT0u33iotXFhKdats9nelTIBPQSgCcJlzoWhM8R8MPnZM2r9f+spXylG/spcF1Du2TwG4zLnD+57XIGuLO7Tf3y8NDUlf/erMr8fHJWOkEyekH/6wuPqz/aQUwqFTBOAy50KxoSGpycniLv++7TZpw4aPfv2LX0iDg9J3vlN8/Wh0SfGDMGuEIgCXObd92ty8uugxsZjU2vrRf/PmSZGIlEwWN08QJDR//o1F18fssX0KwGXOdYrJ5Dp98MELRW+hXugb3whfP0woY/boFAG4zLlOMZlcJ8+rVVb7isevrFHt+kAoAnCZc6GYSKxSNNpZ9bqeF1FHxz3yvKDqtesJ26cAXOZcKHqep8sv3ybfb6pyXV+dnSWc/Mes0CkCcJlzoShJixb9uYIgUbV6nhfVggVfUix2edVq1isuBQfgMidDMQjmafnyn8n351Wt3rJlT1alVr2LxWKKRCJKp6v9aTAA+HROhqIkXXbZBi1a9GcVD0bfj6u39ydqbGytaB18hC1UAK5yNhQlaenSJ5RIrJTnVeaWGd+Pq6vru2pr+1JF5sfFEYoAXOV0KAZBTFdf/Rs1N3+u7B2j78e1ZMkDuuKK75V1Xnw63kAF4CqnQ1GSGhoSWrnyv9Te/vWyBKPnNcj3m7R06Q/U3f2wPM8rwypRDDpFAK5yPhSlmY7xyit36uqrn1ckkpLvh3kz1ZPvN2n+/DW6/vqjSqW+WfZ1YnYIRQCucu6at0tJJtfqxhvf09mzv9TJk49qfPxdSfYSn3vyFARNMmZKra23qKvru2ppuYnusMbYPgXgqjkVitLMp53a2zervX2z0um3NDLyPxoZ+a1GR/doevq8rJ2W7zcqEulUS8vn1dKyRsnkBkWji2u9dBS0tbXpzTffrPUyAOBj5lwoXiiRuEqJxFXq7Lyn1ktBEdg+BeCqOfFMEZ8tbJ8CcBWhiKqjUwTgKkIRVUcoAnCVZ7mZGVWWy+XU0tKiXC7Hm8AAnEKniKqLxWJqbGzkUnAAziEUURNsoQJwEaGImuANVAAuIhRRE3SKAFxEKKImCEUALiIUURNsnwJwEaGImqBTBOAiQhE1QSgCcBGhiJpg+xSAiwhF1ASdIgAXEYqoCUIRgIsIRdQE26cAXDSnPzKMuWd6ekzp9P9qcvJVbd48qMOH/0KeF1EksljNzdequXm1YrEruCgcQE3wlQxUnDFTGh5+VgMD31c6fVC+H5cxOVk7ccFP+QqChKydlu9HlErdo87OrYrFltRs3QDqD6GIirHWanDwJ3r33b+StdPK58dmPdbzopKkBQs2atmynYpE2iq1TAD4EKGIipiYOK0jR76m0dG9MiYTeh7Pi8j356m399+0cOGmMq4QAD6OUETZjY0dUH//BuXzGUnTZZnT95uUSm1RT89jPG8EUDGEIspqJhDXKZ8v/weEfT+u9vY7tWzZEwQjgIrgSAbKZmLidKFDLH8gSpIxWQ0NPaOBgX+syPwAQCiiLKy1OnLka4Ut08oxJqvjx7+nTOZIResAqE+EIspiaOjfNTq6V+V6hngpxuR06NAdsjZf8VoA6guhiJIZM61jxx4o6S3T4ljlcu/p7NldVaoHoF4QiijZuXPPydrKd4gXMiajkye/X9WaAD77CEWU7OTJR4s6mF8u2exhZTKHq14XwGcXd5+iJPl8Run0gVBjBwelHTukQ4ekxkZp3TrpvvukIJjdeGvzGh7+TzU1LQ9VHwD+GJ0iSpJO98v346HG7tghJZPSrl3Sj34kHTwoPfvs7MdbO6mRkVdC1QaAiyEUUZKxsf2ydjLU2Pffl26+WYpEpNZW6frrpePHi60frksFgIshFFGSbPZtGZMLNXbTJumll6RcTjp7VtqzZyYYizE1dUZcygSgXHimiJLk89nQY1eulHbvljZulIyRvvAF6aabip/H2ml5XmPodQDAH9ApoiS+Hy6MjJG2bZPWrpV+/euZZ4ljY9LOncXOZOV5/NsOQHkQiihJY+NihflrNDYmDQ1JX/7yzDPFlhbpi1+c2UIthu/HuRwcQNkQiijJ/PmrFQSJose1tEiplPTcc1I+L6XT0gsvSN3dxc0Tj/9J0bUB4JMQiihJIrFa1k6FGrt9u7R370y3uHnzzPnErVuLmcFTMvn5ULUB4GJ4GIOSRKOdCoImGTNe9NienpmzimEFQULJ5M3hJwCAP0KniJJ4nqeOjq3y/VgNagdqbb216nUBfHYRiihZR8c9VT8r6HlRdXTcF/rtVwC4GEIRJYtGF2vhwk3yvGjVanpegzo7761aPQD1gVBEWSxd+gMFQbg7UIvl+03q6flnRaOpqtQDUD8IRZRFY+Nl6u19JvTl4LPXoETiGqVSd1e4DoB6RCiibNrablNn57crGIyBIpF2XXXVLg7sA6gIQhFl1d39iDo6/rICwdioSCSlVateVSTSXua5AWCGZ/nEACrg1Kl/0e9/v03GTEgyJc3l+01qbv6cVqz4JYEIoKIIRVRMNntMhw/fofHx3ymfTxc9fubsY6CenseUSm1hyxRAxRGKqChrjYaHn9PAwKNKpw/KWiNrJy4xwlcQNBWOXHxbHR3fUjS6uGrrBVDfCEVUTTb7js6d+5XOn39FY2P7NDU1JGunNROEccXjK5RMrlVLy1q1tv6pfJ9bCAFUF6GImrLWsi0KwBm8fYqaIhABuIRQBACggFAEAKCAUAQAoIBQBACggFAEAKCAUAQAoIBQBACggFAEAKCAUAQAoIBQBACggFAEAKCAUAQAoIBQBACggFAEAKCAUAQAoIBQBACg4P8BE44yWQnzR6QAAAAASUVORK5CYII=\n",
"image/png": "iVBORw0KGgoAAAANSUhEUgAAAcUAAAE1CAYAAACWU/udAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDIuMi4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvIxREBQAAIABJREFUeJzs3Xd4VNXWx/HvnCnJpJCE3kMNJCA1BEEpigUUpGuiIoKCvCqKBAtXLNer2BARFJEiqGACAakWQGlKCwQInQBKDSUQSG8zc94/gAhSzEymZWZ97pPnGsjea6mYX/Ype2tUVVURQgghBIqrGxBCCCHchYSiEEIIcZmEohBCCHGZhKIQQghxmYSiEEIIcZmEohBCCHGZhKIQQghxmYSiEEIIcZmEohBCCHGZhKIQQghxmYSiEEIIcZmEohBCCHGZhKIQQghxmc7VDQjhDlRVZffZ3WxJ3cL64+tJSk0ipygHVVXx1/vTslpL7qh1B21qtKF5leZoNBpXtyyEcACNHB0lvFl2YTbfJX/Hxxs+5mzOWQByinJu+LV+ej80aChvLM/L7V/mieZPEOQb5Mx2hRAOJqEovJKqqnyb/C3Dfx6ORbXcNAhvxl/vD8C4+8bxTOtnZOUohIeQUBRe53T2aR5d8CiJJxOtDsN/8tf707xqc+L7xlMrqJadOhRCuIqEovAqB88f5M6Zd5Kel47JYrLLnDpFRzlDOdYNWkeTyk3sMqcQwjUkFIXXOHLxCJFTI0nPS0fFvn/sNWgI8gli85DNhFUIs+vcQgjnkVAUXiHflE/4F+EcyziGRbU4pIYGDVUDqnJw+EH8Df4OqSGEcCx5T1F4hTGrxnA256zDAhFAReVi/kVGLh/psBpCCMeSlaLweFtTt9JxZkfyTHlOqWfUGVn++HI6hHZwSj0hhP3ISlF4vP/89h+nBSJAnimPV3991Wn1hBD2I6EoPNqJzBOsO7rO6XW3n97OwfMHnV5XCFE6EorCo03eMtm2gWnALOB94DNgn3XDzRYzExMn2lZbCOEyEorCoy1LWUaBucC6QWYgDggDXgV6AD8A50o+RZGliJ8P/mxdXSGEy0koCo9ltphJOZ9i/cBzQBbQjkv/hdQDagE7rZvmWMYx8k351tcXQriMhKLwWCnnU9Br9fab8Kx1X27UG9l1Zpf96gshHE5CUXisU9mn0Gq01g+sCPgD67l0KfUQcAQosm4aDRpOZ5+2vr4QwmXkPEXhsQrNhbYN1ALRwM9cCsbqQBOs/q9FRbX+fqYQwqUkFIXHMmgNtg+uCgy66vPpQAvrptCgwUfrY3sPQgink8unwmNVC6iGWTXbNvg0ly6XFnJptZiN1aGoolI1oKpt9YUQLiErReGxwiqEUWS28kbgFTuBbVy6pxgKDMDq/1ryivJoVqWZbfWFEC4hoSg8llbRElYhjF1nbXgC9L7LH6VQO6g2Pjq5fCpEWSKXT4VHe6jRQy65r6dX9HRr2M3pdYUQpSOhKDzasMhhLqmrVbS8EPWCS2oLIWwnoSg8Ws1yNekY2tHpdVtWbUnDCg2dXlcIUToSisLjvd/lfYw6o9PqGXVGPrr3I6fVE0LYj4Si8Hitq7fmuajn8NP7ObyWUWdkYPOB3Fn7TofXEkLYn0ZVVdXVTQjhaAWmAsK/COdoxlEsqsUhNTRoqBZYjZTnU/A3+DukhhDCsWSlKLyCj86HVQNXEeIbgqKx/x97DRqCfIJYPXC1BKIQZZiEovAadYLrsOnpTVT0q4hesd/pGTpFR4gxhPVPrSesQpjd5hVCOJ+EovAqDco3IHlYMp1CO+GvL/2Kzl/vz+01byd5WDIRlSLs0KEQwpXknqLwSqqq8l3ydwxaMAi9Xk+Bat1pFlcCdfz94xnSaggajcYRbQohnExWisIraTQaKpysQKMljZjQbQL1Qurhr/e/5erxyu+HBoXy4T0fkhqbytDWQyUQhfAgslIUXslisRAZGcmYMWPo06cPqqqyN20vW1K3sOH4BrambiWnMAcVFX+9Py2rteSOWncQWT2SZlWaSRAK4aEkFIVXWrBgAWPHjmXr1q0ScEKIYm4biqp66dRyRaOgV/TyjUvYjdlsplmzZowbN45u3WTTbiHE39zm6Ki/LvzFwv0LWXNkDVtTt3I6+zSKRkFFRdEo1A2uS/ta7elcpzN9wvtQzqecq1sWZVRcXBzBwcF07drV1a0IIdyMS1eKqqqy/PByPvjjAzaf3Fy8OryVAH0AZtVMdNNoRrUfJY/BC6sUFRURHh7OtGnTuOuuu1zdjhDCzbgsFE9mnmTAwgFsObmF7KJsq8frFB16Rc9zUc/x7l3vymGuokSmT59OXFwcv/32m6tbEUK4IZeEYvzueIYsHUK+KR+TxVSqufx0flQOqMzSmKU0rdzUTh0KT1RQUEDDhg2ZO3cu7dq1c3U7Qgg35PT3FD/d9CmDFw8muzC71IEIkGvK5cjFI7Sf0Z7Ek4l26FB4qmnTptGsWTMJRCHETTl1pThl6xRiV8SSW5TrkPkDDYH8Puh3mldt7pD5RdmVm5tLgwYNWLZsGa1atXJ1O0IIN+W0leL2U9sZuXykwwIRIKswi25zujm0hiibvvjiC9q3by+BKIS4JaesFAvNhUR8EcHhC4cdXar4kNcvu3/p8FqibMjMzKRBgwasWbOGiAh5WlkIcXNOWSm+t+49TmWfckYp8kx5fJP8DZtPbHZKPeH+PvvsM+6//34JRCHEv3L4SrHAVECljyuRVZjlyDLX0KDhwbAHWRqz1Gk1hXtKT08nLCyMTZs20aBBA1e3I4Rwcw5fKS7YtwAV5771oaKy8vBKTmU5Z3Uq3Ne4cePo3bu3BKIQokQcvs3b+I3jyS60/uV8FgB/AYVAAHAH0LrkwzVomLVjFqM7jLa+tvAIZ8+e5auvvmL79u2ubkUIUUY4dKVospjYdXaXbYM7ACOA/wAxwCogteTD8835rPhzhW21hUf44IMPeOyxx6hdu7arWxFClBEOXSnuP7cfg9ZAobnQ+sGVr/przeWPdKB6yadIPp1sfV3hEU6cOMGsWbPYs2ePq1sRQpQhDg3FHad3lG6CZcAOwARUBRpaNzy3KJe0nDQq+VcqXR+izHnvvfd4+umnqVatmqtbEUKUIQ4NxQt5FygyF9k+QXfgAeA4cASru9Vr9VzMvyih6GX++usv5s2bR0pKiqtbEUKUMQ6/p1jqJ08VIBTIBLZYN1SDhiJLKUJZlEnvvPMOzz//PBUqVHB1K0KIMsahK0VfnS9ajdY+k1mAC1YOUS346nztU1+UCQcOHGDZsmUcPHjQ1a0IIcogh64U64XUQ6/VWz8wG9gFFHApDA8Bu4G61k1TaC6kRmAN6+uLMuutt95i5MiRBAcHu7oVIUQZ5NCVYuvqrck35Vs/UANs5dKDNioQDHQFGls3TWhwqBw+7EV27tzJmjVrmDFjhqtbEUKUUQ4NxYp+FQk0BHI+77x1A/2BQaWv37Jiy9JPIsqMN998k9deew1/f39Xt
yKEKKMcvs1b3/C+6DQO3zjnOjqzjh8/+pG77rqLyZMnc/r0aaf3IJxny5YtbN26lWHDhrm6FSFEGebwUBxx+wjb7iuWUkhgCGf+OMOLL77Ihg0bCA8Pp1OnTnz++eecOiV7onqaMWPGMGbMGHx95cEqIYTtHB6K4ZXCaVK5iaPLXMOoMzLi9hEE+AfQq1cvZs+ezalTp4iNjWXz5s1ERETQsWNHJk2aRGqqFXvHCbe0bt06Dh48yODBg13dihCijHPKIcNbTm6h06xO5JnyHF0KgCr+VTj0wiECDAE3/P2CggJWrFjB/PnzWbp0KREREfTv359+/fpRo4Y8rVqWqKpKp06deOqppxg4cKCr2xFClHFOCUWA2BWxTNkyhVxTrkPrGHVGfnz0R+6qe1eJvr6goIBff/2VhIQElixZQnh4eHFA1qxZ06G9itJbuXIlw4cPZ/fu3eh0zr93LYTwLE4LxQJTAU2/bMqRi0cwWUwOqeGn92NAswFM6T7FpvGFhYXXBGSjRo3o378/ffv2lZMW3JCqqrRt25bY2FgeeeQRV7cjhPAATgtFgFNZp4icGsnZ3LN2D0ajzsjdde9mcfRitErpd9EpLCxk1apVJCQksHjxYho0aFC8ggwNDbVDx6K0lixZwhtvvMH27dtRFIffHhdCeAGnhiJAalYqd359J6ezT9vtHqO/3p9uDbsR1zcOnWL/S2hFRUXFAblo0SLq169fHJB16tSxez3x7ywWCy1btuR///sfDz30kKvbEUJ4CKeHIkBOYQ4jV4zku+TvShWMOkWHj9aHCV0n8FTLp9BoNHbs8saKiopYvXo18+fPZ+HChdSpU4f+/fvTv39/6ta1ch86YbN58+Yxbtw4Nm/e7JR/70II7+CSULxi3dF1DFo8iDPZZ8gtyi3xiRoGrQFFo9CxdkemPzSdWkG1HNzpjZlMJtasWUNCQgILFy6kdu3axQFZr149l/TkDUwmE02bNmXixIncd999rm5HCOFBXBqKcOlhiQ3HN/Dxho/56eBPxXuVZhdmF3+NggJFoPfR46P3YUirITzX5jnqhrjPysxkMrF27Vrmz5/PDz/8QI0aNYoDskGDBq5uz6N88803zJgxg7Vr18oqUQhhVy4PxauZLCb2pu0lKTWJvWl7ySrMQqfoCPENYdOiTXRs2JExz49x+2+EZrOZdevWkZCQwA8//EC1atWKA7Jhw4aubq9MKywspHHjxsyaNYuOHTu6uh0hhIdxq1C8lcmTJ7Njxw6mTp3q6lasYjab+f3334sDskqVKsUP6TRq1MjV7ZU5X331FT/88APLly93dStCCA9UZkJxw4YNjBgxgsTERFe3YjOz2cz69etJSEhgwYIFVKxYsXgF2bixledieaH8/HwaNmzIggULiIqKcnU7QggPVGZCMSsri6pVq5KRkeERO5dYLJZrArJ8+fLFARkeHu7q9tzShAkTWL16NYsXL3Z1K0IID1VmQhEgLCyMRYsWERER4epW7MpisbBhwwbmz5/P/PnzCQoKKg7IJk2cu5m6u8rOzqZBgwasWLGCZs2aubodIYSHKlPbgLRo0YIdO3a4ug27UxSFO++8kwkTJnDs2DGmTZtGRkYGXbt2JSIigrfeeovdu3dThn5+sbvPP/+czp07SyAKIRyqTK0Ux44dy8WLF/noo49c3YpTWCwWNm/eTEJCAvPnz8ff359+/frRv39/brvtNrd/CtdeMjIyaNCgAb///rvcexVCOJSsFN2Yoii0a9eO8ePHc/ToUWbNmkVeXh49evSgcePGjBkzhuTkZI9fQY4fP54HH3xQAlEI4XBlaqWYmppK8+bNOXv2rNeskm5EVVW2bNlSvILU6/XFK8gWLVq49T+b1KxUNp3YxOYTm1l/fD3peemYLCZ8db6EVQijY2hHIqtHElk9EoPWwLlz52jUqBFbt26VbfSEEA5XpkJRVVWqVKnC9u3b5TDgy1RVJSkpiYSEBBISEtBqtcUB2bJlS7cISItqYfmh5Xy0/iM2ndyEQWsguyAbC5brvtZX54tBa0CDhmGRw8j4NQMy4Msvv3RB50IIb1OmQhHgvvvu48UXX+TBBx90dStuR1VVtm3bVhyQQPFTrK1atXJJQCalJvHw/Ic5m3P2mq37SsKgNVBYUMjA2wYyufdk/PR+DupSCCEuKXOh+MorrxAUFMTrr7/u6lbcmqqqbN++nfnz55OQkIDZbC5eQUZGRjo8IAvNhYxZNYbPEz8v9RFhRp2R8sbyJPRPoF2tdnbqUAghrlfmQvH7779n4cKFxSsh8e9UVSU5Obl4BVlUVFQckG3atLF7QOYW5dJtdje2ntpKblGu3eb10/sxq+cs+jfpb7c5hRDiamUuFPfu3UvPnj05ePCgq1spk1RVZefOncUBmZ+fXxyQbdu2LXVAFpgK6PJtF5JOJZFvyrdT138z6ox83/d7ejXuZfe5hRCizIWiyWQiKCiI06dPExgY6Op2yjRVVdm9e3dxQObk5FwTkIpi/Rs7gxYNYu6euaW+ZHorfno/Nj+9maaVmzqshhDCO5W5UASIiori9Q9eJ7heMHmmPLQaLeV8ytG0clP8Df6ubq9MUlWVPXv2FAdkVlYWffv2pX///rRr165EAbny8Ep6ze1l10umN6JBQ6OKjdg5bCd6rd6htYQQ3qXMhGJeUR7xu+OZlTyLjX9tRFVU/H3+DkAVldyiXKoFVKND7Q482+ZZ2tdq7xavJJRFe/fuLQ7IixcvFgdk+/btbxiQ2YXZ1P2sLudyzzmlPz+9H6/e8SpvdnrTKfWEEN7B7UPxfO553l77NjO3z0Sj0ZTosX4NGvz0flT2r8wbHd9gYIuBKJoytXmPW9m3b19xQKanp18TkFqtFoBJmyfx2m+vOXyVeDV/vT9nXz4rr2oIIezGrUNx8f7FPLnoSXJNuRSaC22aw1/vT9PKTYnvF0+d4Dr2bdAL7d+/v/g1j7S0NPr06UO/fv0YsH0AJzJPOLUXf70/n3X9jKdaPeXUukIIz+WWoVhkLuLJRU+y6MAiu6w8tBotPjofvun1Df0i+tmhQwGQkpJCQkICM1fN5M/b/0Q1WPFH6b1/fG4C2gAPWNdDWIUwDjx/wLpBQghxE24XioXmQrp/350/jv1h9ycYjTojkx+YzJMtn7TrvN7u9d9e54M/Prjhtm0lUgCMAx4D6lg3VK/oSXs5jSDfINtqCyHEVdzqRpuqqkTPj2b9sfUOeaQ/z5THsz89y6L9i+w+tzdbe3St7YEIsA/wB0KtH2rUG9l2apvttYUQ4ipuFYozd8xkxeEV5Joc97BGnimPJxY+wens0w6r4W12nd1Vugl2AM0BGx4UzjflszV1a+nqCyHEZW4TiiczT/LiLy+SU5Tj8Fr5pnwGLBzg8ecQOoNFtZBZkGn7BBeBo0AL24YXmgs5lnHM9vpCCHEVtwnFZ3981iHbgt1IkaWIjcc38uPBH51Sz5MVmYvQarS2T5AM1AZCbJ/CkbvnCCG8i87VDcClg2eXH16OyWJyWs2cohzG/j6W7mHdnVbT3ZnN
(base64-encoded PNG figure data omitted)\n",
"text/plain": [
"<Figure size 432x288 with 1 Axes>"
]
......@@ -176,14 +176,14 @@
"outputs": [],
"source": [
"# 自定义GCN层函数\n",
"def gcn_layer(gw, feature, hidden_size, name, activation):\n",
"def gcn_layer(gw, nfeat, efeat, hidden_size, name, activation):\n",
" # gw是一个GraphWrapper;feature是节点的特征向量。\n",
" \n",
" # 定义message函数,\n",
" def send_func(src_feat, dst_feat, edge_feat): \n",
" # 注意: 这里三个参数是固定的,虽然我们只用到了第一个参数。\n",
" # 在本教程中,我们直接返回源节点的特征向量作为message。用户也可以自定义message函数的内容。\n",
" return src_feat['h']\n",
" return src_feat['h'] * edge_feat['e']\n",
"\n",
" # 定义reduce函数,参数feat其实是从message函数那里获得的。\n",
" def recv_func(feat):\n",
......@@ -192,7 +192,7 @@
" return fluid.layers.sequence_pool(feat, pool_type='sum')\n",
"\n",
" # send函数触发message函数,发送消息,并将返回消息。\n",
" msg = gw.send(send_func, nfeat_list=[('h', feature)])\n",
" msg = gw.send(send_func, nfeat_list=[('h', nfeat)], efeat_list=[('e', efeat)])\n",
" # recv函数接收消息,并触发reduce函数,对消息进行处理。\n",
" output = gw.recv(msg, recv_func) \n",
" # 以activation为激活函数的全连接输出层。\n",
......@@ -211,21 +211,14 @@
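To make the send/recv mechanics above concrete, here is a minimal NumPy sketch of what this layer computes, independent of PGL and Paddle. The array names (`src`, `dst`, `nfeat`, `efeat`) are illustrative assumptions, not part of the PGL API:

```python
import numpy as np

# A toy graph: 3 nodes, 4 directed edges.
src = np.array([0, 0, 1, 2])      # source node of each edge
dst = np.array([1, 2, 2, 0])      # destination node of each edge
nfeat = np.random.rand(3, 16)     # node features: [num_nodes, feature_dim]
efeat = np.random.rand(4, 1)      # edge features: [num_edges, 1]

# "send": every edge carries its source node's features,
# scaled by that edge's feature (as in send_func above).
msg = nfeat[src] * efeat          # [num_edges, feature_dim]

# "recv" with sum pooling: scatter-add each message into its
# destination node (the parallel analogue of sequence_pool('sum')).
out = np.zeros_like(nfeat)
np.add.at(out, dst, msg)          # [num_nodes, feature_dim]
```

A fully connected layer applied to `out` would then complete the GCN layer.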
"cell_type": "code",
"execution_count": 7,
"metadata": {},
"outputs": [
{
"name": "stderr",
"output_type": "stream",
"text": [
"/Users/huangzhengjie/Workspace/baidu/nlp-gnn/pgl/pgl/utils/paddle_helper.py:48: UserWarning: Your paddle version is less than 1.5 gather may be slower.\n",
" warnings.warn(\"Your paddle version is less than 1.5\"\n"
]
}
],
"outputs": [],
"source": [
"# 第一层GCN将特征向量从16维映射到8维,激活函数使用relu。\n",
"output = gcn_layer(gw, gw.node_feat['feature'], hidden_size=8, name='gcn_layer_1', activation='relu')\n",
"output = gcn_layer(gw, gw.node_feat['feature'], gw.edge_feat['edge_feature'], \n",
" hidden_size=8, name='gcn_layer_1', activation='relu')\n",
"# 第二层GCN将特征向量从8维映射导2维,对应我们的二分类。不使用激活函数。\n",
"output = gcn_layer(gw, output, hidden_size=1, name='gcn_layer_2', activation=None)"
"output = gcn_layer(gw, output, gw.edge_feat['edge_feature'], \n",
" hidden_size=1, name='gcn_layer_2', activation=None)"
]
},
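The cell that builds the loss is elided in this diff. With a single output logit per node, a typical choice (an assumption here, not necessarily the notebook's exact code) is sigmoid cross-entropy against the binary node labels, assuming `fluid` is imported as in the earlier cells:

```python
# Assumed loss construction for the 1-dimensional logits above;
# the notebook's actual loss cell is not shown in this diff.
label = fluid.layers.data(name='label', shape=[1], dtype='float32')
loss = fluid.layers.sigmoid_cross_entropy_with_logits(x=output, label=label)
loss = fluid.layers.reduce_mean(loss)
```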
{
......@@ -268,36 +261,36 @@
"name": "stdout",
"output_type": "stream",
"text": [
"Epoch 0 | Loss: 0.712927\n",
"Epoch 1 | Loss: 0.665513\n",
"Epoch 2 | Loss: 0.625431\n",
"Epoch 3 | Loss: 0.591621\n",
"Epoch 4 | Loss: 0.563292\n",
"Epoch 5 | Loss: 0.539553\n",
"Epoch 6 | Loss: 0.519604\n",
"Epoch 7 | Loss: 0.502797\n",
"Epoch 8 | Loss: 0.488625\n",
"Epoch 9 | Loss: 0.476778\n",
"Epoch 10 | Loss: 0.466839\n",
"Epoch 11 | Loss: 0.458521\n",
"Epoch 12 | Loss: 0.451596\n",
"Epoch 13 | Loss: 0.445855\n",
"Epoch 14 | Loss: 0.441109\n",
"Epoch 15 | Loss: 0.437194\n",
"Epoch 16 | Loss: 0.434423\n",
"Epoch 17 | Loss: 0.432126\n",
"Epoch 18 | Loss: 0.430175\n",
"Epoch 19 | Loss: 0.428500\n",
"Epoch 20 | Loss: 0.427060\n",
"Epoch 21 | Loss: 0.425821\n",
"Epoch 22 | Loss: 0.424751\n",
"Epoch 23 | Loss: 0.423827\n",
"Epoch 24 | Loss: 0.423026\n",
"Epoch 25 | Loss: 0.422332\n",
"Epoch 26 | Loss: 0.421729\n",
"Epoch 27 | Loss: 0.421204\n",
"Epoch 28 | Loss: 0.420746\n",
"Epoch 29 | Loss: 0.420345\n"
"Epoch 0 | Loss: 0.629119\n",
"Epoch 1 | Loss: 0.614591\n",
"Epoch 2 | Loss: 0.602767\n",
"Epoch 3 | Loss: 0.593824\n",
"Epoch 4 | Loss: 0.587454\n",
"Epoch 5 | Loss: 0.581866\n",
"Epoch 6 | Loss: 0.576963\n",
"Epoch 7 | Loss: 0.572337\n",
"Epoch 8 | Loss: 0.567905\n",
"Epoch 9 | Loss: 0.563806\n",
"Epoch 10 | Loss: 0.559831\n",
"Epoch 11 | Loss: 0.555969\n",
"Epoch 12 | Loss: 0.552211\n",
"Epoch 13 | Loss: 0.548553\n",
"Epoch 14 | Loss: 0.544992\n",
"Epoch 15 | Loss: 0.541524\n",
"Epoch 16 | Loss: 0.538145\n",
"Epoch 17 | Loss: 0.534852\n",
"Epoch 18 | Loss: 0.531641\n",
"Epoch 19 | Loss: 0.528505\n",
"Epoch 20 | Loss: 0.525442\n",
"Epoch 21 | Loss: 0.522446\n",
"Epoch 22 | Loss: 0.519513\n",
"Epoch 23 | Loss: 0.516638\n",
"Epoch 24 | Loss: 0.513819\n",
"Epoch 25 | Loss: 0.511053\n",
"Epoch 26 | Loss: 0.508336\n",
"Epoch 27 | Loss: 0.505668\n",
"Epoch 28 | Loss: 0.503046\n",
"Epoch 29 | Loss: 0.500472\n"
]
}
],
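The training cell itself is also elided in this diff. Below is a sketch of a loop that would produce a log like the one above; the optimizer, learning rate, and the `node_label` feed are assumptions, not the notebook's confirmed code:

```python
# Assumed training loop; the actual cell is elided in this diff.
adam = fluid.optimizer.Adam(learning_rate=0.01)
adam.minimize(loss)

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())

feed_dict = gw.to_feed(g)          # g: the pgl.graph.Graph built earlier
feed_dict['label'] = node_label    # assumed [num_nodes, 1] float32 array

for epoch in range(30):
    train_loss = exe.run(fluid.default_main_program(),
                         feed=feed_dict,
                         fetch_list=[loss],
                         return_numpy=True)[0]
    print('Epoch %d | Loss: %f' % (epoch, float(train_loss)))
```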
......@@ -349,9 +342,9 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.6.7"
"version": "3.6.5"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}