diff --git a/README.md b/README.md
index 4814fbb9e857c9bcbb1513c5622500454477b942..32124165e392a103852dea86ea8595f996b3f31a 100644
--- a/README.md
+++ b/README.md
@@ -4,12 +4,11 @@ English | [简体中文](README_cn.md)
 PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and apply them into practice.
 
 **Recent updates**
+- 2020.8.16, Release the text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and the text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
 - 2020.7.23, Release the playback and PPT of the live class on the BiliBili station, PaddleOCR Introduction, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
 - 2020.7.15, Add mobile App demo, supporting both iOS and Android (based on EasyEdge and Paddle Lite)
 - 2020.7.15, Improve the deployment ability, adding C++ inference and serving deployment. In addition, the benchmarks of the ultra-lightweight OCR model are provided.
 - 2020.7.15, Add several related datasets, data annotation and synthesis tools.
-- 2020.7.9 Add a new model to support recognize the character "space".
-- 2020.7.9 Add the data augument and learning rate decay strategies during training.
 - [more](./doc/doc_en/update_en.md)
 
 ## Features
@@ -91,7 +90,7 @@ Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Andr
 PaddleOCR open-source text detection algorithms list:
 - [x] EAST([paper](https://arxiv.org/abs/1704.03155))
 - [x] DB([paper](https://arxiv.org/abs/1911.08947))
-- [ ] SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research, comming soon)
+- [x] SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research)
 
 On the ICDAR2015 dataset, the text detection result is as follows:
 
@@ -101,6 +100,13 @@ On the ICDAR2015 dataset, the text detection result is as follows:
 |EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
 |DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
 |DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
+|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|
+
+On the Total-Text dataset, the text detection result is as follows:
+
+|Model|Backbone|precision|recall|Hmean|Download link|
+|-|-|-|-|-|-|
+|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|
 
 For the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street view dataset with a total of 30k training images, the related configuration and pre-trained models for the text detection task are as follows:
 |Model|Backbone|Configuration file|Pre-trained model|
@@ -120,7 +126,7 @@ PaddleOCR open-source text recognition algorithms list:
 - [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
 - [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
 - [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
-- [ ] SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research, comming soon)
+- [x] SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research)
 
 Refer to [DTRB](https://arxiv.org/abs/1904.01906); trained on MJSynth and SynthText and evaluated on IIIT, SVT, IC03, IC13, IC15, SVTP and CUTE, the above text recognition algorithms perform as follows:
 
@@ -134,8 +140,14 @@ Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation r
 |STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
 |RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
 |RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
+|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|
+
+**Note:** The SRN model uses a data augmentation method to expand the two training sets mentioned above; the augmented data can be downloaded from [Baidu Drive](todo).
+
+The average accuracy of the two-stage training in the original paper is 89.74%, while the one-stage training used in PaddleOCR reaches 88.33%. Both pre-trained weights can be downloaded [here](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).
 
 We use the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset and crop out 300k training images from the original photos using the position ground truth, with some necessary calibration. In addition, 5 million synthetic samples are generated based on the LSVT corpus to train the model. The related configuration and pre-trained models are as follows:
+
 |Model|Backbone|Configuration file|Pre-trained model|
 |-|-|-|-|
 |ultra-lightweight OCR model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
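The new SAST and SRN entries above are backed by the configuration files added later in this diff (`configs/det/det_r50_vd_sast_icdar15.yml`, `configs/det/det_r50_vd_sast_totaltext.yml`, `configs/rec/rec_r50fpn_vd_none_srn.yml`). A minimal sketch of driving them through the standard `tools/train.py` / `tools/eval.py` entry points, assuming a prepared `./train_data` layout as in the reader configs; the checkpoint path is a placeholder:

```python
# Hedged sketch: launch SAST training and evaluation with the new configs.
# Run from the PaddleOCR repository root.
import subprocess

# Train the SAST detector on the ICDAR mixture defined in det_sast_icdar15_reader.yml.
subprocess.run(
    ["python3", "tools/train.py", "-c", "configs/det/det_r50_vd_sast_icdar15.yml"],
    check=True,
)

# Evaluate a saved checkpoint (the checkpoint path below is a placeholder).
subprocess.run(
    [
        "python3", "tools/eval.py",
        "-c", "configs/det/det_r50_vd_sast_icdar15.yml",
        "-o", "Global.checkpoints=./output/det_sast/best_accuracy",
    ],
    check=True,
)
```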
diff --git a/README_cn.md b/README_cn.md
index cc5cb00a38d97fd3dba46b30a76f2dc606e8d027..c797097c78064e66deb07bf32dc1bcd9a0093d5f 100644
--- a/README_cn.md
+++ b/README_cn.md
@@ -4,12 +4,11 @@
 PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力使用者训练出更好的模型,并应用落地。
 
 **近期更新**
+- 2020.8.16 开源文本检测算法[SAST](https://arxiv.org/abs/1908.05498)和文本识别算法[SRN](https://arxiv.org/abs/2003.12294)
 - 2020.7.23 发布7月21日B站直播课回放和PPT,PaddleOCR开源大礼包全面解读,[获取地址](https://aistudio.baidu.com/aistudio/course/introduce/1519)
 - 2020.7.15 添加基于EasyEdge和Paddle-Lite的移动端DEMO,支持iOS和Android系统
 - 2020.7.15 完善预测部署,添加基于C++预测引擎推理、服务化部署和端侧部署方案,以及超轻量级中文OCR模型预测耗时Benchmark
 - 2020.7.15 整理OCR相关数据集、常用数据标注以及合成工具
-- 2020.7.9 添加支持空格的识别模型,识别效果,预测及训练方式请参考快速开始和文本识别训练相关文档
-- 2020.7.9 添加数据增强、学习率衰减策略,具体参考[配置文件](./doc/doc_ch/config.md)
 - [more](./doc/doc_ch/update.md)
 
@@ -93,7 +92,7 @@ PaddleOCR旨在打造一套丰富、领先、且实用的OCR工具库,助力
 PaddleOCR开源的文本检测算法列表:
 - [x] EAST([paper](https://arxiv.org/abs/1704.03155))
 - [x] DB([paper](https://arxiv.org/abs/1911.08947))
-- [ ] SAST([paper](https://arxiv.org/abs/1908.05498))(百度自研, coming soon)
+- [x] SAST([paper](https://arxiv.org/abs/1908.05498))(百度自研)
 
 在ICDAR2015文本检测公开数据集上,算法效果如下:
 
@@ -103,8 +102,16 @@ PaddleOCR开源的文本检测算法列表:
 |EAST|MobileNetV3|81.67%|79.83%|80.74%|[下载链接](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
 |DB|ResNet50_vd|83.79%|80.65%|82.19%|[下载链接](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
 |DB|MobileNetV3|75.92%|73.18%|74.53%|[下载链接](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
+|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[下载链接](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|
+
+在Total-Text文本检测公开数据集上,算法效果如下:
+
+|模型|骨干网络|precision|recall|Hmean|下载链接|
+|-|-|-|-|-|-|
+|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[下载链接](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|
 
 使用[LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt)街景数据集共3w张数据,训练中文检测模型的相关配置和预训练文件如下:
+
 |模型|骨干网络|配置文件|预训练模型|
 |-|-|-|-|
 |超轻量中文模型|MobileNetV3|det_mv3_db.yml|[下载链接](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
@@ -122,7 +129,7 @@ PaddleOCR开源的文本识别算法列表:
 - [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
 - [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
 - [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
-- [ ] SRN([paper](https://arxiv.org/abs/2003.12294))(百度自研, coming soon)
+- [x] SRN([paper](https://arxiv.org/abs/2003.12294))(百度自研)
 
 参考[DTRB](https://arxiv.org/abs/1904.01906)文字识别训练和评估流程,使用MJSynth和SynthText两个文字识别数据集训练,在IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE数据集上进行评估,算法效果如下:
 
@@ -136,6 +143,10 @@ PaddleOCR开源的文本识别算法列表:
 |STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[下载链接](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
 |RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[下载链接](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
 |RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[下载链接](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
+|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[下载链接](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|
+
+**说明:** SRN模型使用了数据扰动方法对上述提到的两个训练集进行增广,增广后的数据可以在[百度网盘](todo)上下载。
+原始论文使用两阶段训练,平均精度为89.74%;PaddleOCR中使用one-stage训练,平均精度为88.33%。两种预训练权重均可在[下载链接](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)中获取。
 
 使用[LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_ch/datasets.md#1icdar2019-lsvt)街景数据集,根据位置真值从原图中crop出30w张训练数据,并做了必要的校准;此外基于LSVT语料生成500w合成数据训练中文模型,相关配置和预训练文件如下:
diff --git a/configs/det/det_r50_vd_sast_icdar15.yml b/configs/det/det_r50_vd_sast_icdar15.yml
new file mode 100644
index 0000000000000000000000000000000000000000..f1ecd61dc8ccb14fde98c2fc55cb2c9e630b5c44
--- /dev/null
+++ b/configs/det/det_r50_vd_sast_icdar15.yml
@@ -0,0 +1,50 @@
+Global:
+  algorithm: SAST
+  use_gpu: true
+  epoch_num: 2000
+  log_smooth_window: 20
+  print_batch_step: 2
+  save_model_dir: ./output/det_sast/
+  save_epoch_step: 20
+  eval_batch_step: 5000
+  train_batch_size_per_card: 8
+  test_batch_size_per_card: 8
+  image_shape: [3, 512, 512]
+  reader_yml: ./configs/det/det_sast_icdar15_reader.yml
+  pretrain_weights: ./pretrain_models/ResNet50_vd_ssld_pretrained/
+  save_res_path: ./output/det_sast/predicts_sast.txt
+  checkpoints:
+  save_inference_dir:
+
+Architecture:
+  function: ppocr.modeling.architectures.det_model,DetModel
+
+Backbone:
+  function: ppocr.modeling.backbones.det_resnet_vd_sast,ResNet
+  layers: 50
+
+Head:
+  function: ppocr.modeling.heads.det_sast_head,SASTHead
+  model_name: large
+  only_fpn_up: False
+#  with_cab: False
+  with_cab: True
+
+Loss:
+  function: ppocr.modeling.losses.det_sast_loss,SASTLoss
+
+Optimizer:
+  function: ppocr.optimizer,RMSProp
+  base_lr: 0.001
+  decay:
+    function: piecewise_decay
+    boundaries: [30000, 50000, 80000, 100000, 150000]
+    decay_rate: 0.3
+
+PostProcess:
+  function: ppocr.postprocess.sast_postprocess,SASTPostProcess
+  score_thresh: 0.5
+  sample_pts_num: 2
+  nms_thresh: 0.2
+  expand_scale: 1.0
+  shrink_ratio_of_width: 0.3
\ No newline at end of file
diff --git a/configs/det/det_r50_vd_sast_totaltext.yml b/configs/det/det_r50_vd_sast_totaltext.yml
new file mode 100644
index 
0000000000000000000000000000000000000000..ec42ce6d4bafd0c5d4360a255f35d07e83f90787 --- /dev/null +++ b/configs/det/det_r50_vd_sast_totaltext.yml @@ -0,0 +1,50 @@ +Global: + algorithm: SAST + use_gpu: true + epoch_num: 2000 + log_smooth_window: 20 + print_batch_step: 2 + save_model_dir: ./output/det_sast/ + save_epoch_step: 20 + eval_batch_step: 5000 + train_batch_size_per_card: 8 + test_batch_size_per_card: 1 + image_shape: [3, 512, 512] + reader_yml: ./configs/det/det_sast_totaltext_reader.yml + pretrain_weights: ./pretrain_models/ResNet50_vd_ssld_pretrained/ + save_res_path: ./output/det_sast/predicts_sast.txt + checkpoints: + save_inference_dir: + +Architecture: + function: ppocr.modeling.architectures.det_model,DetModel + +Backbone: + function: ppocr.modeling.backbones.det_resnet_vd_sast,ResNet + layers: 50 + +Head: + function: ppocr.modeling.heads.det_sast_head,SASTHead + model_name: large + only_fpn_up: False + # with_cab: False + with_cab: True + +Loss: + function: ppocr.modeling.losses.det_sast_loss,SASTLoss + +Optimizer: + function: ppocr.optimizer,RMSProp + base_lr: 0.001 + decay: + function: piecewise_decay + boundaries: [30000, 50000, 80000, 100000, 150000] + decay_rate: 0.3 + +PostProcess: + function: ppocr.postprocess.sast_postprocess,SASTPostProcess + score_thresh: 0.5 + sample_pts_num: 6 + nms_thresh: 0.2 + expand_scale: 1.2 + shrink_ratio_of_width: 0.2 \ No newline at end of file diff --git a/configs/det/det_sast_icdar15_reader.yml b/configs/det/det_sast_icdar15_reader.yml new file mode 100644 index 0000000000000000000000000000000000000000..1fdea8756c3d80b291aba4afbb8c7d340654b6cb --- /dev/null +++ b/configs/det/det_sast_icdar15_reader.yml @@ -0,0 +1,26 @@ +TrainReader: + reader_function: ppocr.data.det.dataset_traversal,TrainReader + process_function: ppocr.data.det.sast_process,SASTProcessTrain + num_workers: 8 + img_set_dir: ./train_data/ + label_file_path: [./train_data/icdar13/train_label_json.txt, ./train_data/icdar15/train_label_json.txt, ./train_data/icdar17_mlt_latin/train_label_json.txt, ./train_data/coco_text_icdar_4pts/train_label_json.txt] + data_ratio_list: [0.1, 0.45, 0.3, 0.15] + min_crop_side_ratio: 0.3 + min_crop_size: 24 + min_text_size: 4 + max_text_size: 512 + +EvalReader: + reader_function: ppocr.data.det.dataset_traversal,EvalTestReader + process_function: ppocr.data.det.sast_process,SASTProcessTest + img_set_dir: ./train_data/icdar2015/text_localization/ + label_file_path: ./train_data/icdar2015/text_localization/test_icdar2015_label.txt + max_side_len: 1536 + +TestReader: + reader_function: ppocr.data.det.dataset_traversal,EvalTestReader + process_function: ppocr.data.det.sast_process,SASTProcessTest + infer_img: + img_set_dir: ./train_data/icdar2015/text_localization/ + label_file_path: ./train_data/icdar2015/text_localization/test_icdar2015_label.txt + do_eval: True diff --git a/configs/det/det_sast_totaltext_reader.yml b/configs/det/det_sast_totaltext_reader.yml new file mode 100644 index 0000000000000000000000000000000000000000..74c320fcc5e1c8340d0439e350e2a4e88824b16d --- /dev/null +++ b/configs/det/det_sast_totaltext_reader.yml @@ -0,0 +1,24 @@ +TrainReader: + reader_function: ppocr.data.det.dataset_traversal,TrainReader + process_function: ppocr.data.det.sast_process,SASTProcessTrain + num_workers: 8 + img_set_dir: ./train_data/ + label_file_path: [./train_data/art_latin_icdar_14pt/train_no_tt_test/train_label_json.txt, ./train_data/total_text_icdar_14pt/train/train_label_json.txt] + data_ratio_list: [0.5, 0.5] + min_crop_side_ratio: 0.3 + 
min_crop_size: 24
+  min_text_size: 4
+  max_text_size: 512
+
+EvalReader:
+  reader_function: ppocr.data.det.dataset_traversal,EvalTestReader
+  process_function: ppocr.data.det.sast_process,SASTProcessTest
+  img_set_dir: ./train_data/afs/
+  label_file_path: ./train_data/afs/total_text/test_label_json.txt
+  max_side_len: 768
+
+TestReader:
+  reader_function: ppocr.data.det.dataset_traversal,EvalTestReader
+  process_function: ppocr.data.det.sast_process,SASTProcessTest
+  infer_img:
+  max_side_len: 768
diff --git a/configs/rec/rec_r50fpn_vd_none_srn.yml b/configs/rec/rec_r50fpn_vd_none_srn.yml
new file mode 100755
index 0000000000000000000000000000000000000000..7a0f136c28dd967aeb422d843a49cf65b934d7ca
--- /dev/null
+++ b/configs/rec/rec_r50fpn_vd_none_srn.yml
@@ -0,0 +1,49 @@
+Global:
+  algorithm: SRN
+  use_gpu: true
+  epoch_num: 72
+  log_smooth_window: 20
+  print_batch_step: 10
+  save_model_dir: output/rec_pvam_withrotate
+  save_epoch_step: 1
+  eval_batch_step: 8000
+  train_batch_size_per_card: 64
+  test_batch_size_per_card: 1
+  image_shape: [1, 64, 256]
+  max_text_length: 25
+  character_type: en
+  loss_type: srn
+  num_heads: 8
+  average_window: 0.15
+  max_average_window: 15625
+  min_average_window: 10000
+  reader_yml: ./configs/rec/rec_benchmark_reader.yml
+  pretrain_weights:
+  checkpoints:
+  save_inference_dir:
+  infer_img:
+
+Architecture:
+  function: ppocr.modeling.architectures.rec_model,RecModel
+
+Backbone:
+  function: ppocr.modeling.backbones.rec_resnet50_fpn,ResNet
+  layers: 50
+
+Head:
+  function: ppocr.modeling.heads.rec_srn_all_head,SRNPredict
+  encoder_type: rnn
+  num_encoder_TUs: 2
+  num_decoder_TUs: 4
+  hidden_dims: 512
+  SeqRNN:
+    hidden_size: 256
+
+Loss:
+  function: ppocr.modeling.losses.rec_srn_loss,SRNLoss
+
+Optimizer:
+  function: ppocr.optimizer,AdamDecay
+  base_lr: 0.0001
+  beta1: 0.9
+  beta2: 0.999
diff --git a/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties b/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
index 578b5482ad45045124272fa3e54d065a77c2eea2..63dac4e990eb3ca985e9c1018c84f26fbab0ac78 100644
--- a/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
+++ b/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
@@ -1,4 +1,4 @@
-#Thu Aug 22 15:05:37 CST 2019
+#Wed Jul 22 23:48:44 CST 2020
 distributionBase=GRADLE_USER_HOME
 distributionPath=wrapper/dists
 zipStoreBase=GRADLE_USER_HOME
diff --git a/doc/doc_ch/config.md b/doc/doc_ch/config.md
index 5e57909659c5f6246229902ae138d6cbdd63f133..03fe1b3280881472c830cf5ac57dee183a94b373 100644
--- a/doc/doc_ch/config.md
+++ b/doc/doc_ch/config.md
@@ -32,6 +32,9 @@
 | loss_type | 设置 loss 类型 | ctc | 支持两种loss: ctc / attention |
 | distort | 设置是否使用数据增强 | false | 设置为true时,将在训练时随机进行扰动,支持的扰动操作可阅读[img_tools.py](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/data/rec/img_tools.py) |
 | use_space_char | 设置是否识别空格 | false | 仅在 character_type=ch 时支持空格 |
+| average_window | ModelAverage优化器中的窗口长度计算比例 | 0.15 | 目前仅应用于SRN |
+| max_average_window | 平均值计算窗口长度的最大值 | 15625 | 推荐设置为一轮训练中mini-batches的数目 |
+| min_average_window | 平均值计算窗口长度的最小值 | 10000 | \ |
 | reader_yml | 设置reader配置文件 | ./configs/rec/rec_icdar15_reader.yml | \ |
 | pretrain_weights | 加载预训练模型路径 | ./pretrain_models/CRNN/best_accuracy | \ |
 | checkpoints | 加载模型参数路径 | None | 用于中断后加载参数继续训练 |
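The three rows just added to config.md describe Paddle's ModelAverage optimizer, which SRN uses. A minimal sketch of how these values map onto the Paddle Fluid 1.x API (illustrative wiring only, not the exact PaddleOCR build path):

```python
# Illustrative only: how average_window / max_average_window / min_average_window
# from the config table map onto fluid.optimizer.ModelAverage (Paddle 1.x).
import paddle.fluid as fluid

model_average = fluid.optimizer.ModelAverage(
    0.15,                      # average_window: ratio used to derive the window length
    min_average_window=10000,  # lower bound of the averaging window
    max_average_window=15625,  # upper bound; recommended: mini-batches per epoch
)

# At evaluation time, the averaged parameters are swapped in temporarily:
# with model_average.apply(exe):
#     ...  # run the evaluation program with averaged weights
```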
diff --git a/doc/doc_ch/update.md b/doc/doc_ch/update.md
index e22d05b4316ebabbb7bb036e1be1b09600538925..1cd7788511c29df8934efe2c1462aaca68c9b92b 100644
--- a/doc/doc_ch/update.md
+++ b/doc/doc_ch/update.md
@@ -1,4 +1,5 @@
 # 更新
+- 2020.8.16 开源文本检测算法[SAST](https://arxiv.org/abs/1908.05498)和文本识别算法[SRN](https://arxiv.org/abs/2003.12294)
 - 2020.7.23 发布7月21日B站直播课回放和PPT,PaddleOCR开源大礼包全面解读,[获取地址](https://aistudio.baidu.com/aistudio/course/introduce/1519)
 - 2020.7.15 添加基于EasyEdge和Paddle-Lite的移动端DEMO,支持iOS和Android系统
 - 2020.7.15 完善预测部署,添加基于C++预测引擎推理、服务化部署和端侧部署方案,以及超轻量级中文OCR模型预测耗时Benchmark
diff --git a/doc/doc_en/update_en.md b/doc/doc_en/update_en.md
index ef02d9dbf7a1379d029ca6fcf819a06d5996151b..dc839d8955afcfa2d1efbee5e02d35f384d6c627 100644
--- a/doc/doc_en/update_en.md
+++ b/doc/doc_en/update_en.md
@@ -1,4 +1,5 @@
 # RECENT UPDATES
+- 2020.8.16 Release the text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and the text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
 - 2020.7.23, Release the playback and PPT of the live class on the BiliBili station, PaddleOCR Introduction, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
 - 2020.7.15, Add mobile App demo, supporting both iOS and Android (based on EasyEdge and Paddle Lite)
 - 2020.7.15, Improve the deployment ability, adding C++ inference and serving deployment. In addition, the benchmarks of the ultra-lightweight Chinese OCR model are provided.
diff --git a/docker/hubserving/cpu/Dockerfile b/docker/hubserving/cpu/Dockerfile
new file mode 100755
index 0000000000000000000000000000000000000000..8404b4c007731fa02816aecfefc2fe3cd7da7c73
--- /dev/null
+++ b/docker/hubserving/cpu/Dockerfile
@@ -0,0 +1,28 @@
+# Version: 1.0.0
+FROM hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev
+
+# PaddleOCR is based on Python 3.7
+RUN pip3.7 install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN python3.7 -m pip install paddlepaddle==1.7.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN pip3.7 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR
+
+WORKDIR /PaddleOCR
+
+RUN pip3.7 install -r requirments.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN mkdir -p /PaddleOCR/inference
+# Download the OCR detection model (light version). For the normal version, change ch_det_mv3_db_infer to ch_det_r50_vd_db_infer, and remember to update det_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_det_mv3_db_infer.tar -C /PaddleOCR/inference
+
+# Download the OCR recognition model (light version). For the normal version, change ch_rec_mv3_crnn_infer to ch_rec_r34_vd_crnn_enhance_infer, and remember to update rec_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_rec_mv3_crnn_infer.tar -C /PaddleOCR/inference
+
+EXPOSE 8866
+
+CMD ["/bin/bash","-c","export PYTHONPATH=. && hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
\ No newline at end of file
diff --git a/docker/hubserving/gpu/Dockerfile b/docker/hubserving/gpu/Dockerfile
new file mode 100755
index 0000000000000000000000000000000000000000..1320a7f34b05ee470fa2bcf30548295f11427237
--- /dev/null
+++ b/docker/hubserving/gpu/Dockerfile
@@ -0,0 +1,28 @@
+# Version: 1.0.0
+FROM hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda10.0-cudnn7-dev
+
+# PaddleOCR is based on Python 3.7
+RUN pip3.7 install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN python3.7 -m pip install paddlepaddle-gpu==1.7.2.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN pip3.7 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR
+
+WORKDIR /PaddleOCR
+
+RUN pip3.7 install -r requirments.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN mkdir -p /PaddleOCR/inference
+# Download the OCR detection model (light version). For the normal version, change ch_det_mv3_db_infer to ch_det_r50_vd_db_infer, and remember to update det_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_det_mv3_db_infer.tar -C /PaddleOCR/inference
+
+# Download the OCR recognition model (light version). For the normal version, change ch_rec_mv3_crnn_infer to ch_rec_r34_vd_crnn_enhance_infer, and remember to update rec_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_rec_mv3_crnn_infer.tar -C /PaddleOCR/inference
+
+EXPOSE 8866
+
+CMD ["/bin/bash","-c","export PYTHONPATH=. && hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
\ No newline at end of file
diff --git a/docker/hubserving/readme.md b/docker/hubserving/readme.md
new file mode 100644
index 0000000000000000000000000000000000000000..109e6aa6a536c146095b8f46516b1c895dc08337
--- /dev/null
+++ b/docker/hubserving/readme.md
@@ -0,0 +1,55 @@
+# Docker化部署服务
+在日常项目应用中,大家一般都会希望通过Docker技术,把PaddleOCR服务打包成一个镜像,以便在Docker或k8s环境里快速发布上线。
+
+本文提供了一套标准化的代码来实现这个目标。通过如下步骤,可以把PaddleOCR项目快速发布成可调用的Restful API服务。(目前暂时先实现了基于HubServing模式的部署,后续计划增加PaddleServing模式的部署)
+
+## 1.实施前提准备
+
+需要先完成如下基本组件的安装:
+a. Docker环境
+b. 显卡驱动和CUDA 10.0+(GPU)
+c. NVIDIA Container Toolkit(GPU,Docker 19.03以上版本可以跳过此步)
+d. cuDNN 7.6+(GPU)
+
+## 2.制作镜像
+a. 下载PaddleOCR项目代码
+```
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+```
+b. 切换至Dockerfile目录(注:需要区分cpu或gpu版本,下文以cpu为例,gpu版本替换相应关键字即可)
+```
+cd docker/hubserving/cpu
+```
+c. 生成镜像
+```
+docker build -t paddleocr:cpu .
+```
+
+## 3.启动Docker容器
+a. CPU 版本
+```
+sudo docker run -dp 8866:8866 --name paddle_ocr paddleocr:cpu
+```
+b. GPU 版本(通过NVIDIA Container Toolkit)
+```
+sudo nvidia-docker run -dp 8866:8866 --name paddle_ocr paddleocr:gpu
+```
+c. GPU 版本(Docker 19.03以上版本,可以直接用如下命令)
+```
+sudo docker run -dp 8866:8866 --gpus all --name paddle_ocr paddleocr:gpu
+```
+d. 检查服务运行情况(出现"Successfully installed ocr_system"和"Running on http://0.0.0.0:8866/"等信息,表示运行成功)
+```
+docker logs -f paddle_ocr
+```
+
+## 4.测试服务
+a. 计算待识别图片的Base64编码(如果只是测试效果,可以使用免费的在线工具,如:http://tool.chinaz.com/tools/imgtobase/)
+b. 发送服务请求(可参见sample_request.txt中的值)
+```
+curl -H "Content-Type:application/json" -X POST --data "{\"images\": [\"填入图片Base64编码(需要删除'data:image/jpg;base64,')\"]}" http://localhost:8866/predict/ocr_system
+```
+c. 返回结果(如果调用成功,会返回如下结果)
+```
+{"msg":"","results":[[{"confidence":0.8403433561325073,"text":"约定","text_region":[[345,377],[641,390],[634,540],[339,528]]},{"confidence":0.8131805658340454,"text":"最终相遇","text_region":[[356,532],[624,530],[624,596],[356,598]]}]],"status":"0"}
+```
diff --git a/docker/hubserving/sample_request.txt b/docker/hubserving/sample_request.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ec2b25b1f54cb7d7cdaeaff6162810a5772442c6
--- /dev/null
+++ b/docker/hubserving/sample_request.txt
@@ -0,0 +1 @@
+curl -H "Content-Type:application/json" -X POST --data "{\"images\": [\"/9j/4AAQSkZJRgABAQEASABIAAD/…(base64-encoded JPEG payload omitted)…//9k=\"]}" http://localhost:8866/predict/ocr_system
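The same request can also be issued from Python rather than curl. A minimal sketch, assuming the container from step 3 is listening on localhost:8866 and the `requests` package is installed; the image path is a placeholder for any local test image:

```python
# Minimal Python client for the hubserving OCR endpoint started above.
import base64
import json

import requests

with open("doc/imgs/11.jpg", "rb") as f:  # placeholder: any local test image
    img_b64 = base64.b64encode(f.read()).decode("utf-8")

resp = requests.post(
    "http://localhost:8866/predict/ocr_system",
    headers={"Content-Type": "application/json"},
    data=json.dumps({"images": [img_b64]}),
)
# On success the body matches the shape shown in step c:
# {"msg": "", "results": [[{"confidence": ..., "text": ..., "text_region": ...}]], "status": "0"}
print(resp.json())
```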
8V+Qeq+THKVcKz1+l+zHSNRodPlzaaTnPFGUqyNbtJvyb4uqXLXhP8AxF1z/wBVz/ivyJX2h65/6rn/ABX5Gh9sel6LpWr02LQ4nBTxuU7k3e9Ln7zz6Rt2otsaS+0HW3/tXP8AivyPX/YbqGt1+LXPW6med45wUXNrZNO+DwKR7b/B1/NdS/pw/cxS9Vm3r1mulKGh1E4S7ZRxSaa5TSdM8BDq3VWlfUs/4r8j33Uf/wAu1X/Rn+5nzHG9lv4MfJb/AErVa0Op9TfPUc/4r8hy6h1F/wC0c/4r8jLhOh0ZnPdb/wCp60o67qL56jn/ABX5DoavXvnqGf8AFfkZanT5LGPJ7mWteT/p9auPUaiUHDNqZ5U3dSaLGOfuZuPJZbxz9zi3+tXulStCEh0ZFOEh8JGSllOwhUWMW5RpYDQZDQAhoBosNC2hEruIqcC04i5RJoVGqIodOIFU6Hm8TYFKhkdgaJR240RqZLFphWdOdBzAYTYMjSUFTAgt2/QOTISqDZO7zJV2mh36hLxZPW51HtT4Q7psLyt+hR63O5v60Yfx56tE+PM6h1JspT5LeodtlWR35+JAnKElKLcZLhp0195raDXazJDI82VzhFJLuSu/r9DKas0dOvh6PHFcyuTf1/uDd9FU5s+Sbdvb0QGGM8snSbo6rbsdppqD3OX/APoF8GcWm019TznVs0p6uafEXSR7KGSE1tTPK9f0jhq5ZYq4zd7eDb+Pyb9t/F9ZHccmBVMI73fi0Vkpg2SmDolGmGnQpMNMmts6PUti/wBNdyaszEzX6TBNtvgTs8Wu6a+HDbVnquguGCDxOk5O0/c8tPL2Ko+C90zqFzWPI6a4ZpOc4y/l+LXl8de6RxX0edZ8Cle62f1LBUr5qyy8rjjjhk5kMk4mwFSVoq5oWnsXWJyRuzn1AxtRj2exj6rDd7Ho8+PZmXqcV3sZWJrwZDVrYOjqDrJXkmgGWpQtCJwaZU10yGhckPaFSRpAUcc0cWHIJcghIQEhiFoYiKQ0SiESiKBBJgoJcEgSYaYtEoXCWcL+ZG5pN4IwML+ZG5oZ0kqOXzz0caeOPBZgqoRidpMsRR5m42h8XSGxdIRBjk9jNSW72KmTCnNtcFtK+QWkgCqsNeCXjSXA+gJKxwuMrVwq2aXQc9S7G37FXVwbi/YT0zI8erVulZ2+DTP5o/7Y6RKWDWRXP8nN/tT/AHowdNyj3XVdP+m9IzYkrk4XH6rdftR4bDynwejfg3PbX0vg0Y+DO0r4NGPCOXyfRBsGtwmQuTJTvAjPwWGtivn4Y4Izc3LM/N5L+byZ+bydHjUpS5BfAUnuC2dMMDFyDbFSZUKlTFMZNimy5E03SaXPrtXj02lxvJmyOkl+1t+EvLPqfQei4ejaJYYNTzTp5ctfrP29EvCPM/4OIxll6lNxTkuxJ1uk7tX9yPcThDJCUJpOMk00/Kfgd9elSPD/AGt+0yyd/TenZLhus+WL59Yp+nq/uPG2vVfifXYdE6TGu3pulVf/ALS/IfHp+hh+po8EfpiS/gIrOvjaTm6im36JWFkw5ccVLJinBS4cotJ/Sz7HPLpdMrnkw4kvVqJ4n7ea/R6vHo4aXVYs8oSm5LHNSq0qugKzkeNbBOs4EV7T/Bvqu3Ua7Qt7SUcsV9Nn+9HrOvaNa3o+owpXJQ74fVbr91fefOfshqP0b7T6Nt0svdifva2/akfWOVutvoGvbTPuPlemjbTXk2MMKSK2bTfo3UdRgraGRpfS7X7KL+KOyODX0SE62P8AkmT6HuMP8xj/AKC/ceK1y/yPJ9D22H+Zx/0V+46PB8Ux+tq9Rj/ofxMto1+sq8+P+h/EzXEw8n+9NXlEr5Y7MuSWxXyrZkwqxtev5CaS5VH0rTw+Fp8WP/dgl+CSPA/A/SNdpMH/ABM0U/pdv9iZ9DOvx/6lHzf7eZfidehBf+Vgivvbb/ijziRrfabN+kfaLWzTtRn8Nfckv3pmYkXWV+pirPaf4PVWPqP9OH7meQjE9j/g/VQ6j/Th+5hFZ+vUdQt9O1SStvFNJeuzPmcMGdJfyGXj/cf5H1STUU22kkrbbpJCP07R/wDNYP7RfmFz1dnXzdYc/wDwcn9R/kGsWf8A4WT+o/yPov6bo/8Am8H9ovzO/TtJ/wA1h/tV+ZH/AJxP4fPVjzr/AMrJ/Uf5Dcayp74sn9R/ke9/TtH/AM1g/tF+Zy1ujv8AzrD/AGq/MV8Mo/H/AMvFY8lOnaa5T8FzHk9zN1eVPqOpaaaeWTTTtNWxuHIcHl8fEytjHOy1CZmYZ35LeOeyOLWeVpKvwkPiyljmWISJUsHARYYzdQDQZwwS4i5IsNC2ibCV5R2ETjRakhU0TSV1zTJRMo0yEaY3wkkpkHHXnZCIkdZDdo3mgVMhuoJe5MwZvZIjza5kq0OnJLHKRg9Ylc39T0GlXbpG/Y851N3J/UrweswX4wcztsQ0WM0GmxLVHZPiSntf0NNpRhCPpFL9hnSXJo5H8y9kv3Br4moUBuPTyljyZa2hVk4Y90kvU9F0/RRnppYmlU01deTk3ffII81p83dl7Gq97LOr0cdRgcZq01sxebSz0+scWnV0XsPFN7ejJvq9jXPqvC6zTPDlcbumVqPWdb6XcXnwq15XlHmMmNxdNUel4vJ+478Xs9E0TTJSJpmvXTmdckwkiUSkJvnIsatpG/0+Cx47fkwsezTfqbeF3BOLdPiybXX4Is5Xb2bV+ohZHGVp7pj1JNU0IyY3dx3QTrtz/wAej6D1r4eRYsz52v1PY4csMsFODTTPlUE001s0eo6B1dxnHDmlz6+TWXryP5/8GX/7njexOBjJSimnafARUrwHHHHAEMCStDAWjPUCplhaZn58d2a047FTNjtM57Cr5gkEokpDIwMbWIVADJiTXG5ZUPYL4dkzRsfJBxfAho1c+G1wZ+SDi+DfGukrSQFDWgGjeUwhLkiiVyAEhiFoNEUjESiESiKBolAoIRJRKIRKEDMTqa+puaLeKMPH+sja6e+Dn809HGvhtLYsxltRXxqqHJbnlbaw+G41ARVIO6MliT2Obs5cAvhsAhukC2heTJT5EvMl5LkK1OZ7NPyZqfZnTXhlnJlu9yjmnU0/c6fDOVlqvaaDJ8TTRp3seP6hpv0XqmfElUe7uj9Huv4/gb/QtQpwUb8Ff7TaesuDUpc3CT/av4no5vcqvvKjpvBox4RnabwaMP1Uc/k+lkxrYFLcPwQluYLS1sVc/DLjWxVzrZlZDMzLkz865NLMuSjmV2dPjUypvdgNjc0KbaEN0dcNDYmb5CkxGSexUiaCc9xLnuDknvyB3GsiVvS67WaJzej1ebT99d3wptXXF19S0ut9Y/8AVtZ/asykybGGlLq/U5/r9T1jX/Wf5iMmq1GT+c1Wef8ASyyf72VbZKZJDai3bVv1e5y22WwN1zsOwafPqHWnw5Mr9IQb/cIiziaJpkkPGmpKSbTTtNOmn6pmjhzaltN6vUf2svzKOGNtGjghbRjvVhxo6XunNSnJyb5bbbf3s1ca2Rn6KG6ZqQWyOO32
0hWuX+R5Poexw/zOP+iv3Hkdev8AIsn0PXYf5nH/AEV+46vB8qmZ1ZXnx/0f4me1saXVFebH/R/iUWjHyf701eSK2RclyapFTUSUIOTdJKyckZ0DT/H628rVw08G7/8Aqey/ZZ62Tai2lbS2XuZfQdG9LoFOcay5n8Sd8q+F9yr9ppqUXJxTTa3a8qzuzOThPjmZZXqcrzprK5tzT5Tt3+0KMT1f2y6M8ed9SwQvHkaWZJfqy4T+j/f9TzMIk1lZxMY8HrvsCqh1H+nD9zPLRjbPV/YRVHqH9OH7mPKs/XpeoJPp+pT4eKf7mfM46bFS/k48eiPpmu30Go/6Uv3M+eqGyFo9K/6Pi/4cfwO/R8X/AA4/gi12epHb7EdQqvT4v+HH8EC8ONO1BJ/QtuIuUQ7SLTaZZw5Cs1Ryk0ydZ7Ca+HJxuXsWQxcOWvJew5eNzh8njVK18c9izjlfkzMWQuY5nJqcaSr0JDUypjmPjImKORwKYRRuoBoMhqwBLQpoe0A0TSV5RFONFpoW4k0EkNUG40D9S874XAsi9wmgGjpztIJg8tBS9zsauaXuLza76JpJdmi+qPN67ebPT51WkS9jzesjc2dfj9SHr4yM0E0ynODT4NLJHkqZYJnTmoU5IvPdp82kU5xaZdwpbN8KNt+xWvia0en4XOaSW7/Yj1uiwqEEkvBi9Gw90VNqmz0mGNJGEx3XV5z/AGzdfoYS12HN2pxk3GW3qmjJ1OkeHO1F2k+D1mTGskK8ppr6o871rFOOfujs3TsflxydXScWNSTjKCaappnlPtB0h6XI8mOL+HLde3sev0MHlai3TfqX9V0p6rTSwZaaa2dcMw8W7L2NfHv83r5A4NPglLY2OsdLyaDUyhKLST2Mtw3PTzr9Tr1/FJZ2F1YaiSokpUU3zP8AqKpmp06bk/hv6ozkm2avTMDtzS42QcdPjnL1d7FVPZ+oKTTpqqHTg2rSprlAP5kk0OR0SuUIyV8NE/ClHtnG1TtNeCVBpWraH43tT4NJE61x6z7O62Wq0jhk/Xxun7m0eJ6Vq3o9ZGXEJbT+nqe0i1JJp2mrQf2+Z/m+L/z8nZ8ojjmQDjSQccIBkrEZIWiwwJqzDUD5XGI+EAYRHwicVrFyh7BdnsNjDgaoexHVKWTFa4M3UYLvY33jtcFLPh52LzrlKx52cGm00KaNHU4abaRRapnXnXUlNHJUMa3Bo06EoJEJBJE0CRKIRKIoEggUchUhpkrgElMQMjsza6c7SMRPg1+myWxh5f8AURvYnsiwkIw00h+1HleSNodBpoMTC0rHLizJaVfBGR9sGwoi9QrxNLkAzc2ZOb3K8sl+SckKm0LcNzfMjK0Mpsq5pOti6sTb4Olpk48G2NSVNlP6Dmay02ei6vh/SOl5KVyilNfVb/us8toW8GpT4t0eywtZNOk901T+h3eOyrx848pp2rTXk0oP5UZ8cbw554nzCTj+DL2N/KY+UoeuDlyCnsFBWzmtWN8FTOtmXHsipm8lZDPyrkp5lSZey+ShqHszp8Zs/IrbTKWaPY2/BbyTVlbO7TOzJqc5pXuVMuT3J1EnF8lOU22b5ymicrZydtL1aQuybdp+jTNOE+gr/B0/PVf/APR/eNj/AIO8S/W6pkf0wpfxKq/wkZWtulQ/t3+QEv8ACLrX+p0zAvrlb/gjPmj9NXH/AIPunJr4us1UvZdq/gy7h+xPQsbTlhzZf6eV/wAKPK5Pt/1ia/k8Gjx//ZJv9rKWb7YdfzJr9NWNP/h4or9rTYuX/o7H0jT9A6Ppt8PTtOn6yh3P8XYWq6l03QYpQzavT4KTSh3pPj0W58lz9S6hq/8AOdfqcqfiWV1+F0IhBJ2kk/Umwfr/AIJLdhpbnJDIxtkWoOww4NDTw4KuGPBo6eHByeTRxo6SFJF/Gtgui6Fa7BknHL2dk+1pxvek/X3NRdHa/wDPT/8At/vIni3ffGkY2vV6OaS3apL1Z6zGu3FBPlJJ/gUsXTMUckZ5ZPI4O0qpJ+teS+2oxbk0klbb8HV4sXE9mzep754L0j/EotD9Rl+Nmc1dcK/QTI5t3urTJycMjQaP9O1ic1/IYWnL0k+Uv4sbj089XleLG6S/Xn/ur8zZS0+g0tbQxQXL8v8Ai2a+LH90harU49Jp5580qhBW/V+iXuz57j+0Oo0n2ln1HLcsOaoZca3qC4r3XPvv6mp1vqGTXT4ccMf1IP8Ae/f9x5XJHvyyTVqzab9s9V9YhLT67SKcHDNp80dmt1JM8X1n7OZtFOWfSReXTPelvKHs/Ve/4mb0XrGs6LNxxr42kk7lhbqn5afh/sZ7vpvWtB1OP+TZl8TzintNfd5+qsvk0frT5/jgen+xCpdQ/pw/czc1XSNBqpd2XTxU3zKHyt/hyR03pOn6Y836PKbWZptTadUq229ycyyiTlWdb/mWo/6Uv3M8Eo7L6H0HNBZcU8bdKcWr9LVGIvs1iS/zqf8AUQtS07HmqBaPT/8AhrF/zU/6iO/8NYv+an/URP5qeV5ZoCSPVP7MYn/rc/6iBf2WxP8A1uf9RD/NH5ryUkLZ69/ZPE/9cyf1F+Z5zq2iWg189NGbmoJPuapu0mFlibLFSE6Zdw5ONzPfIzHNp8mW8diW1hycF7FPjcxcOTjc0MOTjc8/yY4uVq45liEjPxTui1CRz8aSrcZDUyvF2NTAzDmQmSMwtWA0NYLWwuAloBoe0A0TYREoi5Rrge0C0SCK2IcRrjRFFZ1xNitkVKztOryr6h51SVHaSN5V9S+91CaOr2wpex5/VwubPQ6xfIl7GLnhc2d11zkGmTkx3exVyQaNbJj9ivPDd7GudoY2WF3sFnn8PBBJbySb+i4LmTTqwup6L4em00rT7sMXS8Xv/E3l6OPQ9CnGekxuLW6R6DHweI+zeqcE8De6e30PYafKpRTsqK8d9cXEZ/VtP8XA5JW4r9hdU1QM2pRae+3BW5NZ4tg9JjeZJ8pnolwY+jw/C18oeE3T+411wYfxs/noYH2q6dHU6N5or548nzTNjcMjVeT7RmxrLinjkrUk0fKus6f4OtyQqqbOn5Xp/wAHfe5rICjG3wGsbb4NfpPSZ6uadNRXLfk1kep6nuqmi0E9RlUYqk3u34PTrQLRwUEri+WWtJpcWHH8PHBKS3t87FrLkhkhWROMltT8o0zGG/P75n4x82BNXBb+fcqSwtO0vqjVlFqV02nw/AEsSTTqk+UV+W2PLYpwioJKStPkKOBzlUZUvcKWJqb7d0wkpwacVfsORV1/covgtNJ8rz6nq+kZJT0UVJ247WeewwU5rfxwel6dieLSpSVNuw1Hlfzd9zJVwgkhmdeW4444QcQ1aJIfBOvhvmONbItY4lfDukXMaPNrCGQiNUPY6CGqJmuF9gjNiu9i8oAZMdrgJTsYOpw3exkajF2SbS2PT58d3sZWqwWnsb+PbOxjNA0PyY3CTTQujqlIKTJSJokXQ5EkEiJyJBsmxASZKYJNiA0zS0EmmmmZaexodOdyr3M/J/qI9PpZXFW9yynZRwpqKaLcG20eT5G
sWY8DFwLXAUW2YVoYuCJK9mEtkc1bDgZ2fD81oWsPqjRnBPwLUEnuhy0uKqwpPgNwSjuixSXgCaXay5SsZeaPZlUl6np+lZO/TJN+DzWq5NboWbbtb3PQ8GvSJ6pPVMXw+pzaW2RKX38P9xON/KW+u46eHKlw3F/fuv3Mp438pfmntV+nR4GQFwew2HBx6N0nsVM75LOR0U875CUKeZ7NmZqZ1e5oZnsZOqfJ1eIKc5u+RGSTaDabYEo2dsNRzwU7KE4OLNecL8FXLiu9jfOiUKJS3GvHTJUDTsICQaj7BKFBqFE2gCgF27hpBKJNpBUBiQahYax+xnrRBSHY47kKDHY47mOqFjDDg0dPHgqYI8Ghp47o496VGl0rVavp+PLDBixTjkn33NtNbJVt9DRXV+oP/VtP/WZRwrZFmCKz5tycXFhdS18tvhYI++7Ilkz5d82Vz9kqS+4GKCod8uterTQ0A1uMYEiTL6Xq46Setlkttzj2xXL2f4L3K+u1OXVT7srpL9WC4X5v3G5ElbS3KuXybfu2cTWbq9ov6GMo3Jv3NjWuoP6GXFWy8s79HCF+Bv6NCbTaprhrlBY47FiCK6Z2m1fUtMksGvyqK4jNqaX42X8fXOrxSTnp5+7xtfuZQihqQ/1TX/8AH3Vf9zS/1H+ZH+Puq/7ul/qP8ynRDQfqjtXX1/qv+5pf6j/MF/aDqq/0NL/Uf5lNoXIf6o7V1/aPqq/0NL/Uf5gP7TdVX+hpf6j/ADKE2Jk9g/VT2tN/anqqf83pf6j/ADMrX6vLr9VLU51FTkkmoJpbKvItsU2O20rbQkp0yLORNSs4Z00aGCd1uZUdmi9p220c3kz0Rr4ZN0Xcb4M/Bwi5jZyXDWLkGOiyrB7Dosi5UsJhWKTCTJ4ZlkA2c2SHMBolshskBYDQbAZNAWgGqGPgFkjitm8DtAryoTmW5Z6cryGvi97ib9WtWtvuMnJG5M1tW+foZ1W2dnkv+QqtKG1sr5FRcyFecb3KzS4oZY87FbU5J5Eu+TdJJX4SVIv5IFXLjuzozRxnYsr02oWWNquV6o9ZoOowyY01LlHlc+J77FfFqc2lyfK24vlG0rO9zex9DWsVchw1abqzx2Hqjmldp+5e0urlkyqraRF8nPR/t6iEE86yrzGn9S2ijosjap+UXkaYsaQSR8967pnk6lkUIuTcqSStvc+hIUtPhU3kWKCm+ZUr/E3/ADbyujweb/x1+uPGdO+yefIoz1SWKHLV2/7j076biwYYxwRUVFUjRohukO3h+T+T5PJe2sLLjnGpdqckqdeRkcK1WFJqsiVqzQy44Td8P9gl4WuE0/DW6Lz5M1U83Z/8sx4GsSUopO655FTwzqPbFNPzfBswwxap7+toL9HiqpUayytZ/I4wYadKbe9+gxYF37Ld+DXjpF3tpNt8+g+Gijdza+iH6PX8qKOh0Clk+JNVFftZspUqISUUklS9CSXD5PJd3tccccKoQcccQHEPg45vYi30b5lhfBexblDC+C7he6PP0xi5jWxYghGLwWYIyqxqIM4bDoomUdiTZubHtwUM+K72NnJC0ynlx8lSpsed1Wn3exnyhTo9HqMNp7GTqNO92uTpxv8A6zs4oUdQbi06ZDRt0gUSSwWASRZxzAnWSmDZ1iA7LvT51kKCZa0TrKvcnc9G9dpacEXIpJooaGdwRoLg8fyTla5+GrgZBV4E20MhK0YVoagktgL3Di7QAE+QGtg5cgN0ADwhU2dPJu0txE8rW7TKkTaTqIppitFqJYcqUXSsPNkTRUxNvMkt9zv8HWd+vRavPLPo2pb1TX3FXE9h2PHKWmVp01RWxulR0eafKc7fqxjdj48FbG9y0tkcO/qwZGUM73ZdyPYoZnuxQKmbgy9SrZpZWZ2fefsdXiCo4WC4FiiGtjo/RqcoewmeO/BelCwHjvwObDNlht3RCxVyjSeHbgXLHSNJslJ416HfDPTaPR9Ix9AwdQ6hi1M5Zcs4Vhl6N1t9ER3/AGXf+qdS/rL8zT2Hmvhv0Djjb8Hou/7Mf8p1L+svzJWX7MJ/5p1L+svzEOMGGP2HRx+xuLL9m/Gk6j+K/Mt6HT9C6hPNi02DWQyY8TyXllS2+jflozubRx5xYm/AePDT4LcMdpOh0cPscmtp4XhxtVsX8EN0WOkaXHly5c+oinptPG5X/pSa2X/v2DxxU8jkoKCbtRXCXoY7lklv9qh+KNJD4qgdPF5NZhwRVpu5+0Vz+X3l3Ngm80vh4JKCdKk6fuVnF/P6UQgqDWDL/wAKf4BfBy1/NT/Af51/xRTFyHThOFd8XG+LVCZFfARkexUyvZljUScccmuUrO1Ci+k6DKoJTyQbk0qbfua5z2WprD17qJSxx9i1r3vRb0PT9Jk6Zj12p1602Oc3BKULVpvzfszXMt+I4pwSSHRLi0vSFx1zF/Zv8w1p+kr/AG3i/s3+ZX4p8VYjUW4aHR5dPny6TqMM7wx7pKMK+nn2KSZNln0GEN7DOm6jCta9JrIReLUKoZGlcJeN/R/vA1uOeknkx5FU4efDXhj/AD6BbYqTNWebS6Xpugy5OnQ1E8+Luk+6qe35lSXVdEk2+hw2/wD3v7h/n/5JQmyvNm/k1PT49H0/UF0iEvjTcPh/Eqqve/PBnvq/TVz9n4P/AL39xX5KxlyYts3seTp/Uej9Sz4emQ0uTTJJPvcnb8nnm9ws4mivySgLCTJpGx5Rd0q3RShyi/peTDZxqYU1FMtQYrFG4IbFNMysaHxY1MREamZ3KjkwkxSYSZlcg2zrF2dZFyBNgtguVAuZjTG2C3YHcc2SBNkNgNnWLgLybtFzp6+dspS5L/T1yzbwz/OJ/tOrl8zRTSuyxrXUmUseRW0zp3P8hfqJ8ipKxmTkW2aZhkziV5wbLcqYHZe7NYfFJ6ZS3ZVz6WCfCNXI6Wxnaht2awXMJx6dJqkafT8aWQo6fIm6fJpaVqM01wc/l7Kz/L0OlSSRfRQ0k00qL0XaN/FVwaJSIRPg78lUMVkdIayvltIx814CJZKYWPJb5Ks21Ibhe55/7v6T/a/B2NQjG+B64PS/j30pxxxx1E4446xdNxxxxIQQSQRQ5gNkt0A2RSr5phfBewvdGZhfBo4PBw6ZRfwlzGuCphWyLmNGNaQ6KGNWgYoYkSavOGxWyQsvyjaEZIAGblx3ZnajDzsbU4XZTzY7vYqXibHndRhptpFNpp0buow2nsZeoxNNtI6ca6ysVGQ0E0QakFkEtEMYQcccwDkx+mlWWP1K9h43U0+As9G9f0+ScE0zUg20jE6XO0kbePhHj+ecrXJ3KDilQpbsarSo5Vwa5GWKTaYadoDQ/NlXNOtkWZNUUc0vmYyqO9IXkmn6FfJN2xTk75NcxFp04pp7iMEezOnzuF3NrkBSamn7nb4kX69ho0paZKk9jGzx+HqcsPCk2voafS53BK/BT6rDs1aktu+Kb+q2N9e8tL8Jwu2i54KGnfzl5vY4vJPYhWTgoZvJfyPZlPKk7FIbOzPZlDLuzRzxKWSDbbR04gVyGw3FrwLa3L6aHu
HGKZCVsao0gAHEVKFpllxs7sVU2lY5TamizafT/ZLSPVaJauL1E0oOfbTt72J/xj0r/wCX4f27/IHQ9T1Wj0UdItPpM2OMnJPLFt23frXksf461H/p/Tf7J/mb3c/6RD6j0r/5fh/bv8jl1Dpf/wAvw/t3+RYXWdS/9ndN/sn+YS6vqX/s7p39m/zF+5/0ELqHTXx0CC/77/I0ej6rRZ9Tqoafpi0uVaeTc1k7rW21fh+AiPVdS/8AUOnr/tv8x2LqWp7pf5JosfdFxcscGnT97J/9cz7QzYYqS28DHBpKMI905NKKXlvZFiOPY0Oj6b4monq2rWK44k3Vy8v7uPvOLGb5NcIOpxrSaXF0/G7cfnyyX+lJ7/8Av7iNLp3PFkyppQg6bfl+iHz6dq8km7x983bfddXy6GaiCWBaXTuoY1Sfq/LZW89t3ucn9KK0d4tJqNa21LJ8mNrlJctfff4BRw6ieDHmx6/PPHJJ2p8PymMxqb02PFkUagqSSpFbHDPodQnpYvJjyOpYfX3Xo/c1zuW/iUCcdRFf57qG3skpbt+hZwrNo05Z9RlzZpLbHKdqC9X7lmWKOJzy4UpzWy3T+HtuVGnbbbbbtt+WXda8c932YZylKTlOTk/LYnvxfGWKeWGO033TdIbIRlhCf60U/qjLN99puzYNNkg4/wCMtKm1W81+Y3VaGGHp+j02bWYMTxRce6bpS44soQ02PP1HTadQVOffLbwt/wAjR6itP1LJoscpdqy/E+G9nbVbffR14k/PqJYWq6Zp8ltdX0TaTaSlu9uOQMST+x2ktf6zPn6sy9ZjWPU5IOCjKDaarho29A9Evsnpv0/Jlhj/AEidPFG3dv2e1WVmSz0lkqK9F+A1RXoi4pfZ/wAarWf2X9wan0D/AJnWf2X9xP5oF0NVp+sV/wAKP8SqmanT1039D6nLp+bPkk8S71ljVc1Wy9zGnNQi5NWkr2HufALPBZcbi+fD9C5l1ePqPR3+kZIw12mXa+508kfDXq/4/UPBpun5njhHq+H4mSkodu9vxzzZR6np1pM2fB3KTht3JVe1/wAQ9z6GrkzaHH0bpf6fDUTbw/L8KtuLu39ClPVdAaalg6h+z8yz2abW9I6dD/GOlwzxYalHJNJ262q9uCrPpWnkmv8AHOgX/wB/95V734Ku5c3RofZ3SSli1j0jyyWNJrvTt3e/HJlvU/Zrzg6l+K/M08vTMX/hzSaZ9T0ijDLJrM5/JO29k75V/sMt9E07/wBudP8A6/8AeO/fhXq3pZ9Ln9nusPpcdTHaPxPj1bfiq+88ze56LFptN03ofVsX+M9HqcmoUXCOKab2fFXvyebvcWk6GmHFi0w4rczqT8fKNDSrdGfj5NHSLdGOjjZwr5EPSsTh/VQ9ENY5KhiIStBpBw0olM6iCbkJs5shgyIuCdJinLcmbpCHOmYbwOndxHcJczu8z/A6d3e5HeJcwe/dFzxjqxyzS0CqLZmxdmpo1WJsPFP8xPqlr505GSstSuy71OdOX3mI8tM67jt6m321VkU0c2Z2PPTW/wC0tRy2i5nipTA6VClNWF3qi5FwrN5KOZXZbyOytk4KhqbuLtFvTanhN7lPKJU2nae4az+oz09joNUmkr3NjDkUktzwul1rxtNs9HoNbHJFNMxxbm8qJrlb6YSZXw5FJLcensd/j20cxOVbMcKyLYnze4GbmVOw8L3RGdbnYXued/8Akj+1/G9kPXBWxPYsx4PR/j1f9JOOOZ2E44g6yOhJDZxwrQ4hs4hsi0BboTOQc5UVpz3ZnaT5vge6NTTvZGThe6Zp6eV0cmkSNTCtkXMaKOGXBfxU0jGrPghiQMUNihGFrYVOFlhoBoVClkgVckLTNGcLK2SHOwiZWbHzsZmpw7N0b2XGtyjnw2nsaZrPUeazQ7ZPYUzS1enaTaRnuLXKOrOuxmBkNBNP0BZYCcyWQMBZKdM5kLkYei6Zka7a9Dfw5LSPNdKlaX4HpcCVJnmfyJ7aZW8avcamqFxdIKrOCxrBqmyX7ApNHOVCMGVuMWyjJt2y1mmmqRXSsaap5Iu7FNF+WO/AiWF3wazSeK9bENNNMsrC/QjJClwdPj2mxsdHnfbv7DutxdYp+jaf3/8A/DO6RkSy9t0a/V136GTV/K0/21/E7c+5xc+MfTO5mg3sjN0r+dmhexx+WexC8itFTInuWsjK83Ysw1DMrXBWa9jRyQUnYl4fY6JOQlFwsB40y68PsB8Fp8AaqsNboLsa5Ra7PY7sF01VQY7T67V9Nx5pYY4ZqbTfxYXVLxuG8e+wvVQrTZNvA87svo251HqOrwLSvT4tNWXBGc+7He75qnwVF1fqT/8AK0n9k/zLms1ut0uDRQ0uWEIvTQbUoJtuisuq9Vf+sYv7NG2/JJb/AJA/RdS1efp+uyzxadZcMoqFY6Tvm1e4qPUuqzVrBpmvVYG/4jtLrdTrum9Qjqpwm8copOMUtnuI6bny48k9HLXz0974W0mrb3TbXkV3bqSUjVr+qv8A8jT/ANg/zGx1nVH/AOTp1/2X+ZGaXVtPJrNrppeH2xpr60WdLPW4ks+s1c5pr5MNJOXu9tkTNW2z9X1/8GTknmzKL1EIxaTXyxaTK09HgSc5wSrduzQnLJlm55HbfC8L2QpS071cYanLGGOHzNS/0n4X08nH78nk5KBaHTR0WneVR7c+oXHmEPH3/wDvwPxwpcEz1GknklOWsxNv67L0JnlxRxxeLLGffJRTS4vyaeTG9X/4n/8AA6m5KEF3TfC9PdjEliTWN903tPJXHsgkkouOO+1/rTfMn+RVeoyafJJZdPLJidU4Va+4vEmf8c33/wBNXyYs+izvU6Nt3vkg3an9ff3LePJi12J5dPtJfr43ymTDVaLM6jnWOT/0cqp/tK2r0mXBlWq0klDLV7O1JejRrJZOb+BMtthORj/0iGrwLL2PHmTqcPf1RU1E1DFKT2SVkXPLwI0cvhw6hrvOOHwoP3e7/gU+oZJ4Oh9H1OJ/ymFd696otalPT9CwYHtPKnlmvruv3r8Cp1F//DnTE+Hjf8DqnqcTVX7R4oS1GHqODfDrMammvDpWv3ftJh/+jtJ//cz/AHsT0/UYtX9m9T0/UZYQzaSfxMCnJJyTttK+fKr3Rc0Oo0+n+yOlnqtI9VB6iaUFLtp297L59JlJ+waa9C6uqdJ8dCn/AG7JXVOl/wDoc/7dkfn/AOQPob/ybrP/AEY/xKLafJr9P1ei1Gg6r+h6B6SccK725uXdzX0rf8TCcg1PgN09LqmgpJf5RD96LP2iyLH1bVOXHev3IpaWV9V6ev8A+TD96D+1Mv8A8X1a/wDrX7kOT1C/poa3pXR9Fm+Dq+qPFkpOnhb2fHBVnovs60761X/Yf5Dvta1/jl7f+VD+J57I9nt+wLZLwWvV59L0dfZnRYp9TrSxyyePN8Jvudu1Xirf4GQ9B9m3/t1r/wDx3+QWqf8A8EdM2/1ifj3kK+zmLTdShrOl6jFBanJBz0+Zx3TXKv04f0sr+y+1Z0vROj6yOf8AQOrvNkw43kcVh
rZfX3PPxlaT9Tc+yMZY9f1bHki4zjo5Rkn4adNHn4PZfQWomrMXsNgIg7HwMaR2Nbo0tGt0Z8DS0a4MdVTXxfqofETjWyHxCNIYkGkDFBjNxD5JBfIwhugGyWxbYuEHI9mVZOmPyPYqye7IuSF3Ed4DYDZP4Brn7gqdtCnI6DbmvqV+PRNHG7o2dKqwGPhXBt4dsH3HN4p/9xWWD1Z1f3nnps3urvZ/eedmz0MxGvofiNMt6fUp0myhPkX3uDtM0/PS7xu9+1phLJaoysOspU2WVnTVpon8rmlqTtFbIyfjJrdick0/IcX0nI7sqydMfkklyylmypWky5EWrEZ+5d0eqnhmpJtryjEx5X31ZewzTSMfLjjG+3uenayOWCafJs45ppbngumap4cyi3UX+xnr9Fm70qdi8e2mNf1WlYue6CT2BktjfV7GihnQvE6Y/OitB1M4N+tIrQxPZFmJVwvZFqD2O7+Pfap8EQ2S2C2dmqHNnWC3RCkY/oDs6wbIch/odE2kLnOkDKfuIyZElyTdE7Lkq9ypPJudly3e5Unk9GZ2l14bC+DR0890ZOGXBoad8GGkxtYJbIv4ZMzNO9kaOF8GFWvY3ZYjVFbGPixKG0C0GnZDQETKImcLstNC5REShkhzsVMmPnY1MkLRUyQrwBVk58KaaoysmmSk9j0OSCa4KebCm7o0zrjOxhz0/sIlga4RtywewqWn9jWbTxiSxteAGjXnp9uCtk03saTyQM9oGi1PA1whMsbW9FzUpL/TZ9rXsz1WkyXBbnkNE2ptM9HopvtW5yfyM9XmtiMxsWVoSQ3uPN1GsNlOlYmeSkdOdIryn3bWQLXNuTHYoXVgwjZYxrYCgHBAvGvQsdtsGSoUVxWcK3QjMqTLrWxVzxVM38d9ppOiyKGoX1PTahfG0ORLdyg1+w8jFuGZP3PW6KanpFe9cnp4LLz+le9+peTtFHHB48s4PmLa/BluLtGXlns4jI9hLGTYpuycw0NEpWQtw0jUI+Gn4IeFeEPig0k/AGovD7APE0+DS7E/ALxIVgZyg14FauH+TZNvBqPD7FfJjwSmsOonLHCSdzUbr2omT2F3U6vU6fBo4YMeCaeng28kW3deNxC6jr/+Do1/23+Y16nR9mOPwdTqXjioJySgqXHoctXlX8xotPiXrO5v+Brvd7/tJDHo9TqNbodbHJhxxlGUUvhRa7vzKuo0acezPjabVpNblrSa3U5sGrhn1CjkTSx9iUWl5onR5pJTyZFKedSahKbtRXr9eTPy/nVl7/QTp8j6Zo1HqWR5U5J48bj3SgvVv/3RZnjWX/KMeRZsc91NeF6Cfh97byPvcuW97ExxZ9DN5dG7g954nw/yfuRfLnyz8X1AuKBPwot24pv6DITx5sccuOLja3i/DF4smdKSenUpW6cnSrxsYTH+Vlpjjhg3Sgn9EO+AlHeCSXCYqtTP9fN2L0xqv28h48McbbVtvltttmnPHJ6ttPiFlxvM8TklNK0m6v6DJY2uVsKz6XDn/nIJv18iFpdRgX+TaqaS/wBGe6/aXn/zs5fRGZtPiyJrJjT+qFrHHHBQgqS4Xoc9ZqMe2q0imv8AfxOn+D/MnHqNHqJKOPN2TfEMipl/+d//ABvQRJJXSKeoxvUZsOmXOWaT9ly/2JlzJs2n4dFPLFrLHLGcoThdNOmrDF5fYD1vKsmbMl+rBdiS8Jf32UOrT7fs50r3xv8AgHq23jm27bTtvyU+tZ8cvs/0qEMkXOMGpJNNrjleDpz7lqawk1bdKzei/wD4L0j/AP5U/wB7PPRez+h6Dp+o6Zn+zmDQ6zqC0uXHmlNr4bk9264+ppJ2UozlILuL60vQv/X1/YM56XoX/r6/sGL80GdAlej62/TAv4mW5mtpsnR9Bouow0/VVqMmpxdqh8Nxpq6397MCWRJNvwPUKtDpOny6zrekjiVrFkWWbfEYpptv933lj7W6TJHUy6hjay6XUNOOSDtJ0lTa+mwK1mn6b0F4dLnhl1utV55wd/Dh/u3671979il0brC6b36PWR+N0zNtPG1bhflL96+/kcn9D0t/aHqOLUdYx6vQZ1NQjBxnTq074Ymf2s62ntnw/wBghmbpXSZZJPD9otJHG3cVPdpejd8iX0bpj/8A6k0H4P8AMfNF7R/4u65x+kYP7BHf+LuuLd6jAv8AsIj/ABL0z/5l0H4P8yX0TpjVP7S6D8H+Y+aHtudH0HU8ep6l1PqMMKWq0jqeKSak6u6XqldniYcL6GyuidMim4/aTROlwk239FZjpb7ceNidlT8bLGPwVsaLONHPolrH4NLR8ozcfg09FyjDRtfHwvoOgJhwh8Co0hseAgVwEM3PZAvglgvZDgAxcmMbEz4YyIyySRXbTYeeVWVXPfkRHPgF7AqZzkmhgLZOJp5EJlOmHp3eVIWviWzgVtI2ce2D7jH0quSNjjA/ocfg/wBq0y8z1iW0jAk7NzrLpP6mA2ejhlfoZPkTJhTl4Qt7msSFtp7MmOaUfILBZXAsLVOt2BLVP1K7AYvzB2mTzt+SvKTbJZDKkCE6aaLunyXW5SoODcGmidTsJt4LklR6vouR/DSk90eZ6fDvxKXqej6cqSPOuvzvh5nXoYO0TIXidxQx8HVNdjf+lTMrRT4mXsvBSmqlZzeX6iruB2kXIcFHTvZF2HB0fx77ip8GwWwmwJukdnkvDKnKkAsm4GaVCFlVuzn6nq13+4Msi9So8tcMXLN7h+i6sZM3uVsubZqxGTN7lTLm9ybsunZM3O4h5LfIh5LJi7ZnddJ47C72NLT8ozNPyjU0ytorYjW0/g0cPgztOqSNHC+Dmq13HsPiytjY6LEZ6ZICYaGENAtIMhrYRK8kIyQTVFpoVKIuBn5INWVpL2NHJCynkhTAuK/YmC8SfgbRND6XFaWFPwInp1vsaNAuCfgctTxkZNMvQrz0r9DbljT8CZYU/Bc0XGJ8L4U06o2tA7girqsNY2643GaDIqSHr/LJT1WxjdVuMc68lZT2s74ls4dZadNyTb2Bxptg3bQ/HDdGN9D6fjhsNSpkQVIJO2QuD8APkLwQ2kEMuSK2VXZackyvlSdm2U1mzSWQ9N0md6dx52PN5dpG10XLdJ/Q9Lx31EZ+q+uh8PqGXalKpL71+YEJlrrUHHPjyeGnH8Hf7mZ8Z7leSHDZMW3vRzlYN7kScUbEbFCoeBsRmYkMSAQ1LcDSkT2hJE0ALr2IeNPwhrRFEUEPH7HKPsPqwXAw1ACMEnaSt8sdGCIjEdFGNDox9hqR0VsGkBoSS4VBpHJElcPjkiaOSJLikUQwgWMgSWxVy4sbabgm1w6LcuCvkHLxKrNlTK+S1kZSzM0ySnnaaaZ5zqLSnSSX0PQaiVJ/Q83rZd2Z+x1YTSU6GJr0QlMKzYju5eiIcl6IV3EOYAblQLmLcxbmHCNc64FOd8gOTbIsfCE6fgGl6I6yLAJpegSSfhAodjg5MVodGF8IdHG34HY8O3BZjhS8GVqplXhj4LUMfsHHF7D44/YztV+QY4bo0dJFporQhT4L+ljujGp4
vw4HR5FRWwyPJUVD0EDF7BFGhgsJgSKBbYqb2GTaEZHsMlXUPkouVMtah7MoSluHCM7/AHOeTYQ5AynS5HImpyZN3uWNBPuymXkm2y90l3lZPknMVPfb0+jVtGtkdYH9DL0S3RqZnWnf0OP+P9raPIdbnSf1MFybNjrj3r3Mbk9LHxhfoWQEQ1sakCQDDYEgAJC2GwGMBZATIoAhKw8cG2kdGNssYYJ5EiNXgb3TIdsEvFG7pV2tNcGToMbpJI3dPjpKzxPN+rrsaZ+NHDNOKHN7FbFshzex1ePyX8+1k5mU8j3LWVlObI3squaZ8F6BnaR2kaMDq8F+Hn4YLnwMFz4O3yfDihqJNWU3Pd7lrVcMy5zak9zi1riKbLIJnk9wZTvyKk7Mr5E9DPK9yvKTkxrVgqG5H76Awi3yWccOAIQLEFwXKHhcKqjS0zVoz8KujS08ODfZxq6fdIv4uChp01Ro4VaRzVSxAfEVjQ5IQNQSBQSGEnNWccMAaFyQ5oCSAK8kV82NNXRbkhU1sIM5waOSGzjTYpqmSTqOaOTOsABoFwsazqH0cU9RjUsTVcoytHkcJ0+VszfnC4s8/wBjhrMkPSTNce5Yz1GvCdrkON2IxXSLWONmG5wQ7DG2WYRpoDDFIsJVTRyaaRPBKVbktWia2M1Id0JnJt0hz4EtXNsAVNuPkTPNapssZINop5cTbs6McTScsk2Xul5FGa3p2ZuWMo+BmhyOOVb+TvxPTOX29J1iHfo+9cwal/B/vMFOmelSWo0Tg91KLi/vR5VtxdPlbP6m19xp/ZrlsRGVsRPLSJxZLMzXoMdBlXHMfB2I4sxGxExHRAzEEkCglwI3Uc0EkTRNICVEpIKjktzKhHaHFbnJWElRjqAcQgYoNIUU5BJEIlFw0kkElw0MEJgsCoJ8FfI9h82Vcr2JSq5WU8z5LWZ7GfqJUmbYTVHV5KT38HnM0u7K2bGtyVF/QwpStt+p2eOIqWyLBb3Is14B2C2RYLYcDmwGzmyCgIhs5ENgSbIvcGwo7sKZuOPc6Ro6fDsthOkxXTrdmrjx9qRjqtMZBDHS4GqASW4aRlWvHRgMjE5IYkZ0cTGJc0y3RWii5pluiEWLcUGiEgiokcHsMFxe4dlBz4FyYbYtlQFyZXyPZj5FfJ5KKqOpezM5vcv6p7MzZPcciUt0hOSfgKUhT3LkTSsj8mn0dXJszJq0a3RY7WZ+f/8A50p9ep0S3Rf1TrAylolui5rXWCjj/jtp8eK627ml6syUafWXeVL3Zmno4+MP7RQLWwYL4NCLYD4DYEgBbAYbAYw5kJWzg4rcAOCpXRa0UHPOkkVvBpdKSUnJ83SMt/Dek0OFRitvBrYkkkUNGrijQxujjvjjaT0dFbB3sBFkyaoxuOfARllyVMj3HZpFZu2c+u9Jb0bNOD2Rk6R/NRq43sju/j34MnATWwS4Ilwejv8A1VGdqlszFzJqbN3Uq0zG1C+c8zzXiKrOyK2G9p3acfaRXaSojO05RLlCEh2NboFIZBU0bZoeHwR4NTTxWxm4FwjU0/g6NiNHDHgv4VwUsC2Ro4UtjCqPhHYYkdDgKhBKCRCRKRQEkccjqGHAtBAsYKkhUlsPkhM1sIKuSJXmi5NbFea5Jqaqt06OUgcyp2hSn6iHT7DTK6mNiwM2rTMDVrs6jLarSf8AD+BvxdoxOrLs1uOS5aa/B/3mvj+8Tr4uadWkXoQqilomnBGhAx8hZh2OI1LcCAyzk00FewSVoBRb5GqkiFFTtbLkFRa3Y1K3ZzQ5CLaQuUExrTBpmuSUs2FNMpwj2ZbXqa2RWjPzKpWjr8WuI1HounT7sFL0swOq4/g6/LFKk33L6Pf99mp0nI9l4K/2lw1PDmS5Tg39N1+9nXPcP+nn8srnVh43wIyJqV0MxvfciiL2NlvHIpY5bFrG0T1cXYOxyK0H7liLJ6ZyCQCYaYGNEohBE0kUdRJxnQKISQMQ0RYEoJEBLgmKiUSQiS4pJBxxQcwWEwWKlSplTK9i1kKmV8iQp5nsZmplszRzcMytW2kzbESxtfk+VoyWXtfK3RRZ34npCGyLOfBDZZushsizmwCCLObIbGEtgtkNkWPgTdjcKuSQpK2W9LC5oWvgjY0GJVb8IuOgMMVDEkTds5q6czkEg0gUhkURVDig0iIoNIikJIt6ZboqpblzTLdEIq4kSQiXwUlye4aewryGnsMkti29gmwJPYcBc2V8rHSZWyvkuJUNXLZmc3Ze1b2ZQbKiQvcitgkrJosiJLY2ejR+QyZI2+kxqC2MP5N/wKfXo9Et0WNe6xCtEt0H1F1iOf8Aj/G39PE9Wd6ivqZ9F3qjvUv6FM9DPxggh8EsF7Isi2A+Q2BIYLYD5DYDGHVYaQKCTFQKy/oMna0r8lAZhm1JURZ6N7bQ5E4o0IzR5rpurTSi3ujax5k4rdGLXOvTQU0ldgTyFR5vcGWbbky3BTMk7E3Yl5LfISZybyXVzSuppGvi3SMXTupo2cXCNv49PP09ES4CXBDPU3/qpS1C2ZkZ185tahbMyNQvm+88nz1NIrY7tCSDrY4yKo6hlAtFQBSGQ5BoKJvmk8Tg8GppvBlYXVGlp3wdOxGrg8GhidIzsEuC/idmClzG9hqQnGPSGEpbk0dRNFBCJOOGEMhok5gCpIU0Pa2FtCBEkV8iqy3JFbLHYmlVDOtrM/LNwbd7Gnmjs1Rl6mOzEiphmT8lnHkT8mL3uE6LeHPxbKueCVsQlaMzrMbnil6Nr/3+Baw5U0tyv1Rd2GMvRpjx6qr8Hok1BGpidmZoqcEaMNuDLy/U5WYBq26Fwe1jYcWctaGrgmtgU2wr2JU5W2S0SkqIdvYcAHyCwnswHJI1yVDNJoo54b8F1tNciMqTTN8fUaH0yfblo0us4fj9MyNK3BKa+7n9jZjaV9udfU9JBLJp+18NNP6NHbg8/HiZY7XApwcHwabwOM545LeDaf3MTkw+xnfpK+NlrHKkV+xxY2L2Jqou48nuWsc7XJmxdFnHNog19MYmVcc7HxYdM1MNOxUQ0xWgxHAp7hEUCXIaAXIaJMdEohEgcSiSESVFOOJOKCHwC+CWQxFSchTylyZTyrkUiKpZuGZGsezNfOtmY+t2TOjxorz+rVzZSexoZlbZSzRabOzPxJTYLZzZDZoaWwWzmyGxhDZDZ17gsZus7yRZMVbGDYK3waWhh86bXBTwwto09LGjLSpPbRiriSkdj3QyjF0QKQ2KBSDSJpmJDELQaM6mjXJb0/JUXJc0/JCKtol8EIlvYZUDYV7C73CTCJS2BJ7BSFSZUIubK+R7MdNlbI9i4Shq3syg3uXdW+SinbLiRpUjjr2IbGSHu6N7pkaxowVvJfU9H02P8mjm/lX/ABGfre0a4B6m6h9w3SLYr9UdRf0I8E/xa348V1B3qpexVHa2V6mb9xFnfn4wcDIIGTKhFsWxjYtlQAYITBYBBKZBxIGmNxryKW9D4qkiaZ2ObhJOLaaNjSa1TilJ00YsRuNtO06MdG9D8bbkF5vczsed0k2M+JZjqn1dhNt8liD
szcM22aOE5thcwbTX1NvB+qjDxbSRt6feCNP431efqylsQwlwRI9jU/xNVzr5TI1C+Y2M6+VmTqP1vvPF/kFopIJLYFINHJCRQLQytgWioANEpEtHLY1zSeEx7F3Tzaa3KsUPxOmjr0Gxp5XW5p4HdGRpnwaWB8GFU0cbLEGVcTLUBwzKOJXBzKCAWSyHyMnEWQ2Q2AS2LZLdnCAGIyKyw0LmrQqSjlhszO1GPk18kbRR1ELT2EmvParG020JxTp02aOqx7N0Zck4T9jXN7OJ+NPT5OEO1b79M17FDBOvJccu7E17C5yqhnTZXBGrFWjE6fJp16G1jdpGXlnsodBNv2HpUhWN7Dk9jlrQcSWwYslbuyTFbOs7aiHwOQy5ySVlPJlbdJjdRJ1SKptiIo/iOuSHO0wAWbRIsc6yp+56TQz78SXqjy6dOzb6XltLfbg6c0ZB1DD26tzS2yK/v4ZSnj9jc1+PugpVvF/sZmTgTucqqz54/YW4UXpwEuHsR0dIWw7GwHGmTF0yaa1jdFmDtFOD4LEHsQcWYsNMSmMTEZqYSFphJkgyIxC4sYmBwSJRxwzSiTjhxSTjjioQWQwnwCxAnItirkRbmVcqCIqhnWxk6yOzNnMtmZmpjaf0N8VNedyQ3ZVzQtPY0ssPmaK2SGzOiaSyMiabBstZ4clRqmb5vQ5sFs5shssIbIbJbA5GaVux+KDdAY4Wy/p8VtOtidUQzBjpJ0XsKpgQgklsOgqZla1k4tYx6SaEY+EOTM2sEkSjkyVyKmJBJgphIipo4su6fkox5LunZnWdW0RJ7AuVIBytiTRBJ0LTCTAkti5Ow2xci4CsjK2R7MfNlbI9mXCZurdWU09yxrHyU09y4k5M6wLOsZGQVzS9z1HT4/JH6Hl8G+aK9z1mhVQX0OT+VfUh5+trSrYodWlSf0NDTfqmV1me0g8P+rS/Hi9TK8837i7Oyu8svqyGzunxzpIb2OsCTKAWwGEwWUAsFhMF7CCDjmcSBw3aRYRWx/rIsoi04NDIC0MjyZ01iPAxMVBjEzDRrOn5NPF4M3Trc0sXCOfYi3je6NrSu4IxcfKNnSP+TRf8f/Zc+ri4IkSuCJHt6/1NXzfqsyNQ/mZr5nUWYuqfzP6ni/yC0GLDXAmLGpnHwhkMiyE7ZUCSGiTjSB4VD4JpWKirZYhG4s69GuaZ7I1MD4MnTukjU074MKGliZag9ipiZaxvYIZ6exLIjwcywhgtktgsZBbBbJbBYglMkFHWBOkA1sGwWAJmtipmjaZekitlSdklWRqIWnsY+pg03segzxuzI1cFbsvN9oqjhnTLsJ3GjOT7ZtFnHPY01Di3omviNe5t4kqPPaefbn+pu6fImkZeWCfVxOhqtio7jVSOSxoNOtg1wLW7GLgXDcd4Is5ukOQ1fKrsqSjTLzV3YqWNMuekVUaBe5YeN2d8IuaLipNNFvpebtydrfPgDJj2YjBL4eoTfqb+Pafleua+JgaflUZkobGhpJrJgVO6K2aHbkkvc28nuSrUpwEzgXJRESiYEqSiLaplmcaETQh1MJFjHIqJ0xsGRVLkXY1MrwdjkyFGphJ7i0w0yQbFjosQhsQhnLgnkFMJFQ0olEI4qKSccjioSCGSyGKgqasrZEW2hE0LqVDMrTM7UR2ZqZVyUNRG7NM1NYOoi1NlfJG0X9XCnZSkjeVKhnhszPyRps1syTTM7NHdm2NEpyIsLIqYps6oEtkxVsFbj8ULa2C+gdgx20q3NbFjUIpVuI0eHiTRdUTDVbZiEg0mSkGok9aJgxqYCVHXQjPUglIr9xPeILCkEpUVlMLv9yKi1ZU9y3p5GYsm6LmmnaMqztXnK0SmKTsYuBEJE2DZ1gBN7C5PYKwJsqAnIyrmezHzZUzPZlwmbrJblK9yzrHuylZrPiT1Kwk9hKYSYwt6NXqEeu0aqK+h5Ppi7tSj2GlVJHD/ACr7PLW06rGYPWp1Gf3m/i2xfceY67Osc3fhmni/1itfHk5SuTfqyLAu2zrO6MRtgtkWRYyc2QySGARIFhMFkgLOOZwqBY9pItIqLZlqLtJmejhiDQtBoz6ZsXQxPgSmGuSKbR0+6RpYuEZum3SNLEtjl2ItY+Ua+jfyGRjXBr6L+bH4P9lz6vR4IkSuCJHuX/Q1bUP5TE1b+Z/U2dS6gzC1b+b7zxvOWwwYxMrwkGpnLxJ3cEnsJi7GplSAZxyORcN4rGty1BbblfGWcdHXozMCp17mpg8GZjVTNLA9kYaDQwvYt42VML2RaxhFLEeDmzlwQyicwXwEwGBBaBfITBYBB1nENiCbBbOsFsCc3sV8nDGt7CZvZioU8y5MvVxtM1MztMzNTwyomsTM+3IFCYOq/XFwlR087EruGX8qm/J6HSq4o8xjn86fuei0U32K9zHyT0GjB0PTTQiG6Gwo460hsXuFYtbB2Sp1pAt7k1ZyivQqEFtENoY4r0IcV6FADojbwE4bgOLXAgDItmZmeThks0sjdNUZWre5XjvNI09J0XP340m+UXdTH5k/VGB0DUVNQvez0eoVwteNzu+5sOe4oyQmcSxJbCZIwNWmivNFrIivMRENBQYLBumhURcg6HxdlPHK6LMGY1cWE7DixUXYyLJM6I1cCosbEZjQaYCCTKMSJITJRUUk4gkqBwLCBYqQZCciHPgXkWxKapZEUsytMv5UU8y2Y5U1kaqFpmZNVZtahbMycyps6M0lLLwyjmXJeycMqZVaNc32ln5kV/JayrdiO3c7M30ToRtl/S4nOS2EYcbbSRsabEoK2tydaXmdPxwUYpLwMSIQaMXRI5IJI5BoRhoiQbAbGKFsBy9yZMW2CaYpk9+wmyHIms6cp78mjo3aMdT3RraF2jDTP+2hF7BpiosamJQ0Q2RZDewB17gTZLYE3sVARN8lXM9mWMjKWd7M0hMzWPcqWP1ct2VLNp8SYmEpCVIJSHwNjoy7s9nsNKtkeS6DG237nrtKuDzP5N/zPDTusL+h5Dr8/wCSmeuyvtwN+x4b7RZKg16s6fF/R7+PP2TYtM5M7WRiZ1g2dYgKyLBsiwIdkPgGzrJDmcdZxIcizidxRWQ7C969SNGsINAINGdppT3GJ7gErlEU2ppuEaWIzdNwjSwnLsRbxrg1tF/NmVj8Gro/1EivB/sufV2PBEiVwRJnt2/4K/tT1b+VmBqnv95u6t7M89q3Tf1PH86NAjIYnuVYTdliLOeJPgOQiI1MqGYmSgUEuC4bxMJUWcT3KMWWMU9zs1DX1tNe5fwPZGYpqky9p5qkYahxq4XwXMbM/DOy9iZKlpcEM5O0cMkMFktkAQWCwmQxEF8EEsBsA5sCTObAlLYQDKQmcjpzoRkmBFZpbPczdTLZouZppJ7mZqMnO5eYms3VO5iUFmdzbBR1T4kyDaaPQdPncUrMGELNXpk6dPlGXkno69Bj4Q+Lor43cUOXg4bOKhqZPkGPIaIUklEElQ0nHJkNl8JDAbR0ntZXyZaDhGTSaZma3FdtFqWZ0VsuS00ys55eo1S+lyePUq3W57PE/iYE/LVHicTrKm
vU9h06fdgSu9rO3J4LmhM0W8yqbrh7lWaMdT2pWmVplrIVZk8Ihi26DmxTYWA7HLgt43aKEHuW8cjHUUtxYxMRF7DUzNR8WNiyvFjYtAZ6ewSYtMNFQzEyUwEyUyjMOIOKNJD4OOfAUAYE+BjFz4Iqaq5EU8q2ZcyFTLwxSpZ+dcmTqVTZrZ/Jl6pcs3xUVm5irNbMtZSvNWb5JTyRsWsdstShZMMbb4OnNEHpcKW7RoRTYGLHSSLMIk29b5nHRi2NUAowoYokrK7SWqHdvsQ4B0yXwKk6HyQiSGOlti2w5CZOhpqXJIW5+4E5iZTt0KstVYg7kmbehfymFhe6+puaF/Kc+2caEWMXApMYmSsRDZIMgAGxc3sHIVJlQE5HsUM0tmXMz2KGoezNckytXLcrKQ3Vvcqpm+Z6SbZKYCZNjKvU/Z+H8kn6nqdKt0jznQ4Vpo+6PTaNW0eP5r3yLz8WtW+3Tv6HgPtFO2l6s951B1gaPnnX53nSO7xfYXkZFhJi0wr2OtmZZ1gJk2ICsiyLIsAmzrIOJJNnWQciTEmMxumhaDRNJbTtINCcLtUORjVQSDjygEHD9ZEUNPT8I0sPgztNwjSw+Dm2cW8fg1dH+oZWPwauj/UK8H+y59XFwdIlcESPZv8Aor+2frHszzesl87Xuei1r2Z5nWP+Vr3PK8zPQIclnGyrFjsbOYluLGJiIO0NTHKZyewSYKexKLlDwSY2EqZXsOMtz0bFNCM7iW9Nk43M2E9uR+nyU+TLUDfwT43L+KZjafJdGlhnZhYqNGE9g7srwnsMUgMbdENoFysFvYCG2gWwHKgHMRcG2KciHP3FSmIhylXkRKZE57clfJkryAFkn7lXJkryRkyc7lPNl9ypOpdny87mbqcmz3GZs3O5n5MjnJ+h0YykN72wo7sBMZBWzawLOFbFjA3jzKvIrGqQ5xqpLwZU+em/p8lxVluLsztE1KCdmhFKzh3PYh8OLDTAjxQS4MqsRyZB1bFQC8EMhWiXuiwXkexRyJ2y/KNoTkxX4DpWKE7SETtsu5MTXgS8Lb4D98TYrLZpno+k5rgt/ZmHPFS4LvS8zx5VFvZnT49dic+q9DqF8qfpsUps0JL4mB15Wxm5GVue+tScjKeRlnIynmZCaTN7i2zpSoU5pvkfCPxljGytjdss4zHUWtQY1Mrx4GpmNhw6LGxkITGJiUsRkNi7K8WNi6ZUM5Epgp2ghmNcEgolMYSQySGOmh8C58DGKnwZ1NV8nBTy8Mt5GU8z2ZMSoZ/Jl6ndM0875MzUPdnRhLPyCGh2R7gVZ0xJShb2RZw4kq23CxYqVsdGO5pGmYKER8IEY4j0gaxyQaRyQSEbkgWkEC2ALkkyvkVD5OhM2VBVeaKuV0WZuilnmkmOM7VfJkraxeNtuxc33S2DxqirPTGruHwbmh/UMPDyjc0X6iOXYi+hiFxGIlQwZBIhgZbQmaHtCZouBWyooZ42maU1aZTzQ2ZpCYOqhbKnY14NPUQuT2K/w16G0vpKpTQUN2l6se8Xsdiw3mgq8odvpNew6TFRwQXsej0a4MLp0KxxXsb+jXk8a3vka5+A6pKsVex8661O9VXoj3/V5VBr2PnfVJd2rl7Ho+L6nyfVDyEC1uSdTNKZJCJEHWdZxAgk444RJJIORNAkGgUg0RQODppllO1sVoplrDFvZmejEk2OxQ+ZErG/QdihTMrTXNOqpGjhKGFUX8KObRxbx+DU0n6iMzH4NTSfqI08H+y59XFwDJ7ErgGb2PW1f8VM3WvZnmNU7yno9c9meb1G+VnmeZnoEWPg6EIbBnLSWYMcmVovcfBpomUHxYaYpMNOi+m8EdYfYC4NHrqMhPYPHkqbViUqBcu3ITYG7pMlpbmrhnsjz+indbmzhnsjDUONTHPYap7clLHMfGRnYo/uOchfcR3CCZPYRPJToOb2ZRzZKb3Diae8nuKlkt8laWZeomeevIcT1ZnkpclXJm9xGTPtyVcmf3LmSpuXNvyVM2bncVkze5Vy5W20mbZwTs2VzbSewqyDrNpJCEixhVsrxLmGOwqaxjRYjG016icaLGNGdUs6DI18rfDo18bbS3MPH8mdPhM2dPJNLezk8s9p+VciHEXDdBrk51pXNhEIlclQJq0C00MSOaNAWwasY4gNNE0FSgn4FShT4LDFMzv0K8o7bicT+HmT9y3NWipOLU7NvEzr1Ojyd+Bb26KOqXw8so+LtfRhdJyJwSv2D6rClDIvo/4HX9yufGXkkVMstxuWZTyzomZSXkmIU7nyRlnyKi9zT8+iaGKZbxtNGbjnVFzHPgx1lUq9BjUVsc+B8XaOexZiYxMTYxMimsQY1FeDHxkq3FFQ6LoNOyvdvYdjexUBiCBTJTGBHEHAYXwKyPYbITNmdTVbK9ilmezLeZlLM+RSFVHUPkytTLc0dQ+TJ1Et2dXjiKrvdh4sdu2RCLlL2LMUkkjpOTrkhkFbQKQ2CG1hsUNSAihqQKdRxJzAIb2FthSfgXJjBc3sImxuRlXI6KTaVkmUNRO9ixmnSZSm23uVGeqWkOggEtx0EGkH4dmjd0S/kzExLdG3o/5tHNoRdjwNQqPAaZCzUcwUyWMBfIuSGMBlQEyVlbLDZ7FxqxOSFouEx9RjuT2E/D9jRzY7kL+F7GkpcUfh7cB6bDeoht5LTxew7R4f5dOuCdX1SsbmkjUFt4NvSKomTp1SRsaZVA8zxzu2kZnWZbPfwfPNa+7VZPqe961LaR4HUb5pv3PT8X1n5P8AZXa3ICaIqjoQ4444QccjiUiaTkiSaJSJtCKJSJolInpuS3Gwg2TjhfgtY8fGxndAOPH7FvHiqtgseP2LWPH7GVo4GOO1wMjCmNhANQIp8TiiXcK4K0I0WsSMNKWoco1NKvkRl4+UaulXyI18H+yp9WVwBk2TDXAvK/lPU8n+qmRrnszz2f8AnWb+uezPP5t8jfueb5PdZ0KGRAiEjn1CNTHQlRXTGRZlfQW4u0GmJxy2DscpvHWiaTOaOo9lQXC+CvmTUkXEJ1ULipLww6VO0cnaNvC9jE0a3Rs4VsjLf1UXoPgsRkVsfA+PBlVG2c2CcLhum9mY2uzdk+TWyOos811XLWRehcz1npMtQ/ViZ6i/JQlnt7C3mZrMI6uTz+5XyZ72TK7m35AtmkwByyN+Qbsg66L4STkRZKe4+AyCtl7EqSKeFWy9jWxnpUPxosQQnGh8DOqHOLeO1ytzR0T7oJ+pSx7osaKfZJwb4exh5Z6TfvWrDYalYrG00mNTORSUrYxLYBcjEXAlIlI5Ik1AXEFrYY+AG9hWBXy7K0VJZKe5Y1DpMz5NtkXPsqcsya3F5JpimmBJNMvOeM71r9KyVOrNjWY/jaOcVzVr6o81oMvZlSflnqME1OHrsdOV5+PJZJ+Slmye5d6tB6bW5cfi7X0e5kZJttmsym/QydslbEUSiuA2DplrHMpJ0OhKjPWTaWOfBYhIzsc+CzCZz6yqVdTGJlWMxikY3JrKnQyEm2ivHfyPx7GfFSrMOB0BEGNixwzkSD4JsZiRxCfgk
AGTETY2TEZGRSV8zKGd1ZdyPZlDO+RyJrO1MqTMjK7mzS1bdMzHvJnZ4p6SZBUhqFRYxM24uGxVjooVAdHgFmRGLgWgkxGMhsiyGwCJMTJ7ByYmcqGXS8kinmnyOyzpOilklbLkRaTkbbFMZIWy0VCW42AtDYE1Kzi5RtaTbGjFxPdGxpnWNHPqexPq6mGmJixkSVnIJLYCIxAAtANDqIcRyghoGStD3AFw2LlClOFvgD4a9C1KG4LiUFZw9h+ix1kbolxQ/Sxq2Ru/4hfwrg18CrFZlYVbRrQ2w/ccXhn+ao8/1h2ps8Rlg3Nv3Z7TqzuM/vPKZce7Z6GGWvrPaAa3LOSFCGqN0AZxLIYByDSAQyK2IoSkSkElZKRnaEJDIQt2Co20i3hgRq8MWPHxsWsePjY7Fj4LeOCRjafHY8ZYjAmEB8YEq4GMPYLtGRic47io4iC3HwQpLcfjRjoLGNcGpp1UEZeNbo1sCqK+hv8Ax5/kqfTnwJzP5WOZXzv5Weh5PimPrntIw5q5M2ddLZmO+X9Tz9/WdAkEQd5MakaDiLTGJmWoZ0HQ1PYQmNi7RMN5aiKGUc0j2ll1udNd0GmEyGhFR6NUzYwrZGXpFUzWwrZGWvp5Wsa2HRWwrGth6WxFWmqIYVHNBARm/VZ5TrH84vqerzfqM8r1bfMkaY+stsuiKDaBaOhIGC2G0LZUJ1nEWRZXAKwlyAHDdhYFrAtrLkEVsK2Rbxrgxq4fjWw+KFQQ+KM6ZuMmX8nkjO6t02djXA3Lj78TrlcGevfoWdjQ0824J8lmLszdBk7saT8bGlF2tjis5ShqCTFph3sXDGmSmAjrZpAJuhcmw0rBktgClqHyUm9y/njdlKcGnwTamo+pDVkbhJWwmuFxEPkmmvDPQ6DMppepgODqy503O4ZVFs2zronpH2q07vDqUtv1JfvX8TzTR7vqmFarpmWKVvt7l9VueKlFPwdeb6GiCUG4EdtFEgNOgaYcYNk2A2EnsWsbK0ItD4OjHZxagxqZWjIZGZhYqLcGWIMqY7ZbhwZ2KixBjouivBjk9ieKOTJTFJ7hphwDT3Cb2AT3IbFw0SYjIxkmImxcJXyPcp6hclrI9ytldplZiax9WtmZr5ZratbMypKpM7fHPRCQcRaYxM0WdBjosrxY2MhGsJkpiVIPuEfRNkOQLYDkPhOlIRkYcpbFbNOk6Gm0nNPwVm9wpu2xbZcZ9QxbCbAbGSU9xsRKY2AqSzj5Rq6eXyIycfKNLTv5UYaEq9jdj4MrYuCzAzV06IxCoDkI0pE0ckFWwwGiGkHQLRUpkzjuKaofJWLaKlMposadVFe4posYVSSI8v8AqFzAt0aj2wfcZunW6NLLth+45fB7tOPN9T3T9zByY7XBv9Q8oypxO7KKyc2Or2KeSLT4NnLjTTKGbHzsaSs7FBrcBodODTFMskIbDdCfI7GTr4DUgkjkgkjC0JhG2X8MCrhW6L+GKpGWr7OLGOHBZxxF40WMaM1wyEaQ1IGK2GxQGlLYhoYlsBIKEJbjoIUluOgjGksYV8y+pq4lUUZmBXJI1Ma2Or+NPa4N8FXUOky0ylqnUWdnlpsTWy5Mxsva2e7XuUG9zh19ZBbIs5s5GdhDQaAQyJFhjQ2LpikGmZ/kPPtEMJ7IFnrRqA45nN7CqVjSLezVwrYzNGrNXCtkZVUWoDoicY9EKSQySBwE6j9Rnk+qO856vUuoP6Hkte71L9jXP1jr6ptAtDGC0awiWhUuRz4Ez5NYA2RZzZDLAkxuNWxUSzgVsnQW8S2RbxoRiXBZgqMKs+CHxVCYD4GdM2C3RZilVCID4uiDIi/0fUtf6Mt0aWLImk0zO1cXOCceVuM0ea4pN7ow8mf7Z31WqnsMT2K0J2kPT2MpVGRdnAphFdAgZHWC2P8AQJyKyvKFlpqwHHcz1QrfCTOWFrgsdgSRHaXFPInFCMORwzpr1L+SCaKGSHbktG/ivtN9PU6WayadX6UzxepxvFqcmNquybX7T1PScndiSb8GN9oMDxdSc0tsqUvv4f7jvxfR33GU0jlGwlCxqgkVbxBccd+BqgkEtjm6JttNDSOtIFzITsX5M5Mdj3ZWgWcaIsVFrHsWsfBUxsswexjYqHp0GpCUw0yeKOg7Y5PYRjdDUyeAdgNkOVC5SFwOlL3E5HZ05iZz25DgKyPcrzfIycrYmbNMwlTUbpmVkjUmaed8mfkXzM6cJJWwaYLVHWaGamEpUJTJ7hKWFOie9lbvryd8SvIcLqz3gufuV3k9wZZNuQ4Vps8iS5KmSdsjJkvZMS5lSItdJgNkOQDZRObAb3ObIW7KSZDdj4LcTBDomegfDku4HsilAt4XwY0NHEyzBlTE+C1Axqj4DkJgNTJ6ZkQgEwkx9NIMibIbLhltAtByBe44A1uOxrdCvI3Fu0R5r/iF/TK5Ivah1ia9inpVc0WtW6xmf8efVR57X7tGfJF/W7zKckdcRVXIirlhfgvTRXnHkpNZmWHNop5IU2amWHJSyw5LlTxTfI3G9xc1TDxumGvhLUeEEiIO0F5OamfhW6L+HwUMbovYHZlr6cXYIsQRXx+CxAmLPgthqFRYxOgBl0gHuybsEWgKKGxFpDYIyoWtOrmjTgqRn6VfMjRjwdv8WKjpcGdq5bMvzexl62VJm3kp1h6yVzoptjtVK8jK7Zy1i6wkAnbDRPANcjIoWg0yeGYg4gJhJiuTYLAbO7iG9jvWhsFvagXIHuuSQVNaWkVJGniWxm6RUkaeLgwq4tQHJCYDkSaSGccxwKuqdQf0PJ6t3qJfU9TrXUH9DymZ3lk/c1yy19LYD4DBkjWAmfAjJyWJFea3NcguiGE0RRYFDkuYFsVMa3L2GOyM9CLWNFmCEY1wWYIxqjYIdFCoj4ozqjIOhvckrsUuBOoy01FPkkLWF98/VAzh8HUWtoy3+8LSKoJvlh6pd+JpcrdCs76K/FzC00mWE7MjS6h0oy2aNHHO1ycevVTKsJ70GuBKe43u2I/RisiQLe5F2H6Nz5Oo7jc60LoRR2yObIF0ByVRR1CLs3SKWd2XjXKjS50jLU0mx32iw9+LDmS/VbTfs+DL0Obszretz0Gsh+ldMyJK322vqtz0cUs3seVqjjmwHIsht+4ty9wXNC3KypkhuRKluKsKLK4cWYMsY2VIMt4jLSotY2PgIhVD4Mxq4amGmAmGkTxRuNjHLYUnSOb2JAnL3FSkRKQqc6XIcCJz3EznsBOe4qU7HITpS9xU5HOQqctjSQFZnyUZ8ss5ZclWTu2bZiSpMCyZ8gN0XAKzu4W2Q5DBrkC50KcwHMfC6a8gqeR+opz9wHIcibRufuC5gN2C2PieicrBbIsixgVhRQC5DiKg2I2L2EobAzoOg9y3h5RUh4LWHlGdDQwvgtw8FPC+C5Dwc+jWI8Bpik9gkzPqjEye4VZ1jlBykTYnuonvKlA2wWwXMhyLiktljCrVlRy3RcwcIy899BpaNfMN1jqBGiW9+xGtfyleCcyr+mBq3eQqssah3lYhnTElSQiaLMkKmhpqlljdlLLFbmlk
jsylmW7KiazcsabFxdMsZo8ldKmX/RLmN7DBOJ7DLOez2DsbLuCVFCDpljFKmjLUDWxS2RZgzOwzLsJ2iFyrSYakV1INTsDPTJW7Fp2g47kaI2KGwFRQ6HJkcXtKty8uCnpVsXPB6P8AHnIqF5HSZj6+ez3NbM6izC6jOk9w3S1WJmneRim7InK5v6kWYshLkamKQaYjMTCTFphpiMxMYmJQxMRvMuZDn7iu87us7+KG3Z2LfJYDlUW7D0yuSJ18JsaVbI0sS2Rn6ZbI0MfBzVcWoDUhUBiYjSQyfoC9yoFDqDrG37Hl57zb9Wek6pKsT+h5trc1yyv0NANbDaAfBpCV5iJ8lmZXnyaQwUdQSRyRXTHijckXsSpIrYY72XMa4M9U4sY1wPiJgh0TKmbBD4CYD4kUxN0m34KCbzaj2THazL2Ymr3YOihUO9rdhITQxvsVEZJqm2xTnTEajJSSTFIQ3d98eS/pcvelvuZuHJ4fA5SeGSnH9V80ZeXH6hWNmD3Q26KGHUqdNMtqdqzgs4IY3Ss5StC3NPYlNUIzE7RDRCdcAt7h0Oba9zlJVud5OdNAESaaKmZWmPm2hMnfJU9FVKFwypt+T1fTprJp0nvZ5jJFKVm50fKmqs7vHv1EZ+vPdRg9Nrc2JukpNr6PdFN5L8m39q9O4Z8Opitprsb91x+w87Z25k50ans7uOsT3EqRRHWFEQpDYSJpxZgW8a2KeN7l3DwY6VFmHA6IiA+BnVw2IxC4vYNOiFDboFs5sXJgESkV8k6Qc2VsstnuOQi5z3e4ty9xcp7gudIqQDcxOSewMpiZzvyXIQZu2Ikw5MTJ8mkIE2KbJnITKW5pAPuBcgHIBzHwhOYDnYDkC2PiKJsGwWyGyiE2RZFnWAScRZwASDiLQxMmgxMZEUmMiyKD4FvC90U4FrC6oyoaOF8FuDVFHC+C3CVJHPo1hMKxSkHZkYrIbBbBbGBORDnXkW5C5ToqA/4hPeU3kp8krIaQ+rXdc0rNHT8Ix8U+7Kja0/COf+RffDla2iXytide6H6RVjsq6+XJ0eKcyr+mFldzf1FsmTuTfuCzYkNCpDXwKkgBGRbMpZkXshTzIqIUMy5KjVMu5VyVpLcuFU42PTEY9mORlr6Qovcfje5XQ6BnYOL2GdUXYT2MyEmi3jnsRYa8phxluVVMZCe4uH1cix0HuVsbuizj3Zjo1iK2HY1uhMUWMatog1/Tr5VsWfAjAqihzex6Xi9YXPivqHSZ5zqc6TN7VSpM8v1TJu1fkz3UbrNbtkim6YSdkMjUw0xDkHCQlLCYSFxYxMRjTCTFoNDN5DuJTAslM7uGKb2SLWkVyRTW7svaRboz2TY062RoY1sUdOtkXsfBzVpFiHAwXDgMRuOOYLezKgrJ6tKsT+hgG11iXyUYprn4xv1wDWwYLNICZorT5LMxE1uXABBJW6ISGQjbGpYxLZFqC4E44liCMrTNgh0QIoZHkiqNihqdIXBHZprHicn4RIqlqpvNqVjXFmjFKEEl4Rn9Og55JZWuOC/Jjv8AxIZSpNmfmyXPktaifbBmRky/NY8zpL+PJVbl7DkUlT3TMOGbfkuYc3G47DX2p6efdFt43+wv4dV3xSRSxZVONPdPwQ8U8L78e8fTyjl8ni/XuFZ/xsQk2rYxMzsGri1Te5YWZS4ZxWWF1aUjm0yu8nuT8RBwdPvbYByYvvI716jmR0U26ETbDc0/IDaZcynpGSTLXS9R2ZUm9rK84pi8acMia2pm2JxPfb03WNP+m9JyRirml3w+qPBNuz3/AE3OsmJJ7ujyPX9E9F1Caiqx5W5w/ijt8OuzjS+4zLO7gGzrN0GqY2EyqmNgyaa/iluX8L2MvE90aOF7Iw0uLkWNiytFjYsyWsJhJiEwkxGbYLYPcC5AA5GVczqLHykVNRKoNlwqqOe4tzAcgHL3KkLo5SFORDkLci5CdOQqcjpSEzkXIA5JCXK2TKQps0iaJsW5HNi27HImpbIsiziuEmziLOsAmziDgCTjjgCUw0xaDTJoNQyIlMbFkULEGWcTKkCzjfBnQvYnwWYyKWNliMtkY6hxaUgu8rqQXd7mVgOcwXMW5gSnsLgE5ipzBlMTOZchOlk3OWX3K8p7gd5rmDrV0Mu7LZ6HTrZHnekq52ej064OHz/78Xn42NOqxIzOoSpS3NPHtiX0MbqU6TOvx+sxd+Mi7dkgJk2aklsVJhNi5PkAVkZVy8FiZXyDiaqZEV5ItzQiS3LhFRVMYkCkMiiKEpDYoiMRsUQExVD4NoWkGtmLg4epsbiduyqmWsCsnU5CXcXCLmJFXCuC5jWxy36Z8SxhVtCIotadXJBPql/EqQU3SZ0OAcjpM9HPrC2drZ0meU6lO517npNfOkzyetleV+xlplukNnRlvQDdsG6ewuIObJjJpi7tWSmLhrcHfA5MqY5U0WL2slUMUqDUiv3hRmAeYaaBbrYbHdUDODT4O5VRBWzR0i4KGNbmnpFwZbEauBcF3HwVMK4LmNUcyz48BAoIRuYEnSCYE3sy4VYPV5W0vcy2aHVHeVIoNbm2fjFAMg2gJFwyZrYrzW5amtivPZlwBSHYY27FRVst4YbBTPxosQQvHEfBGNUJIYkRFDUiVJiqKXUcjUI448yZeeyMtJ6nqNcqL/cGfvSrQ0uP4WnjHy1bDkyW62SFZHSbF/ZKOvyVGjInO3yWeo5rnSfBnOTOjGfRHwyU+S3hyu0ZsE29kXtPBt8BqF1q6ebdGlhk9jN08KSs0cLrk56fR5tLHL82N9kv2MR3Z9OqyQdeq3RdUkg1Mz1ma+jnVGOsTfIxalPyhuTS6bK7niV+q2f7BM+l4mv5PLOH1pozvglT+a56lJ7NHfpKe9iJdLzreOeEvqmgJaLVx4UX9JB/4cLlWP0pJ02MhqIvyjKy6bWR3eKT+lMSsmbG/mjJfVD/APKp9vQKSfoFVsydPq7aTZq4ZqaVMy13P040en5HCa3LX2g0X6b0yU4K8mJd8a81yvwKWnTU19T0WnXdhSluq3N/DrtaZj5c2A2Xer6daTqeowLZRm6+j3X7yg2d6LOXgk9xkGITDg9yaIvYnuaGKWyMvFLcv4ZbIw0qL0ZDVIqQl7jkzNawpBKQlS2J7hcM3uAlIBzFzmOQJnP3KeryfI9wsmSvJQ1WS4vcuQrSnMFzK/ffk5zNOEa5gOYtz9xbmVIBSmJnPYiUtxbZchObAbo5sW2VIlzkDZDdnFlU2dZBwEmybBJsAI6wSbEE2dZxwB1hJgnIAamNixKYyLIsCzBliD4KsGWIvYzoWsch0ZFSEqHRkZ2BZUwu9UV1ILuM7Ac5C5T2BchcpC4BSmInImUhM2VIQZSA7tyJyFp219TSB6Po8agn6nodOraMPpMaxI9BpVbR5nkvfI0z8ab+XD9x53qc6i/VnoM7rE/oeY6pPx7nfn4vSipE9wpM7uNCMbFyZFgtgAzYmW4yTFNjSVNCZLcsNWhTQApLcOETqoJcioGqQSYBKJ4DU6JTF2cmHAcnbRf0y2RnY92jV0y2Rn5PhLmJcFu
CEY1sWYrY5KZkUXNOt0VIIvaZeS8TtUuJUhWZ0mN4RV1MqTPQvzi2L1KdQe55fPLum37m91XJUWedk7bMfrn1fYWwWSyGMkxdbMKxbCTfkVgOg9x6e1FfG02htkWGKyU2iFRIKYGN7lpw7oWVYLcv4F3QpnXa0k6qKNOjT0i4Kc4VP7y/pFSRjupaOFUW4FbEti1AxWcuCSFwEotiAWwMj+Vje1+gGWD7HsaQq811B3nr0KtbF7V4ZSztpFR45R5TRrKyJaBaoc0LmioCJK0Iktyy0Jmty4Ycatl7FGkivhjb4LmNbE6OGwQ5IGCpDYozqhRQ1IiC2GJEKJ1E1jwZJeUtvqVOlY2lPK+XsmH1Ob7YYly3bLWDGsWnhCt6tlf0TpFXUT7YNlmbMzqOTtxvfwPM7SY2pyd+VsVCDm+A4Y3OVtcs0tNpOG19x03UzE2k6fSt1tsaWHTqKW33j8WFRXAxtRW5za10cRGKiNWRLyUsuoS4ZWeq35J5aOtdZlfIyOa/JjwzNtblmGSx8NqRyX5GKe3JnxycbjVkvyBrimT3lZT9yVP3KkB7khWSEZqmrAc9jlNFcTVPUaaEXcVT9huhyOM1Fuws7TiVITcZpp00yPJ4puIj1mCGyfqrNrA6xpGD0nUQ1MFCU0pJUrNfKs2PDJ4kpzS2V1Zx4zvNdEnp4j7UTUutZmt1ST+tGI2XOpTyy1mV5045G22mt0yjJnp5npjv669xkXuIT3GRY6S3je5dxSryZ2N8FvHIw1DaGOY9SKMJUWIy9zKxUWVI7v8AcSpHOYjNcxGSdI6U6RXyTKkAcuTncztVktcj809uTO1E7fJtmBHeQ5iu4iy+A1zBcgGwWyuEJsW2c2A2PiXNgNnN7g2USTrIOKCbJsElcgEnJNhKLY7Hib8E2yESothKDLccG3AxYaXBP7Pij2P0OcGvBeeFehDxL0F+hxRaa8EFx4bXAuWFrhFfoEobFguDXgJIVI7GyxF7FaBYjwRQamNTFIKJFI5Owu4UmTZFgG5bANnN7ANi4ENipsNsVNjBUmdDeaXucwsCvNFe5XyB6vpkaxL6G7o1c0Y3T1WOK9jc0S+dex5X3bWfFnVusTPJ9Tnc0r8nqNfKsVHj+ozvPR6OT0R3HdwrvOUrNCNcgWwLIctgCWxbZDkA5bjKjBaJTtHMQLZ0TpcnRACRNEpWxijSEC0mwlAOkTYgPDC5I1sEaSM7TK3Zq4VsjDyUlnGth8RUFSQ2JzmdjW5o6dUihjVtGlhVRRr4Z/kqfTW9mZ+rlSZem6TMrWzpM69VVec6tku1ZjNl3qeS8le5QsiOe/RWQDZ1jAiLRFkWAMjKmqG997laxkG6omwLEW2Ngre4nH4LCdE1UefSqRe0r3r1KdfOW9L+ujp01z9MyxqfBb0q2QnKk5lrTR4MNF/a9iT2LeOFicMS7ihsZqg4Y0kthqgq4CjEYo34HIZPw/YXlx/K9i6oC8sPlexpIVYOTAnN7CZ6dNcGvLDbboVLD7Anjz+fSVulRSyQcdmj0uTDzsZup06d7FypsYjQuSLGaDhKmhDVs0hQzCi3BCMMdkW8a2IqoOK2HQQMUOgiKqCitgiUtiMjUMcpvwiTZ0l+kdRrlRdGjP0KnTYNueVrd+S4029x1KvNGRrovJNRRuZIWnSKbwLvbaKzeFVLS6RRSbRoQxqKuiUkkJzahRTSYrbRwzJkUEUM+pSvcRqNVzuUJ5HN87F5x0j8uocm0mKjN3bYCOWxpyQl3HPgtQye5nQlXkfGdeSLDaMcj4sbHJ7lCGUasnuLgXlk9yfie5TWTbk74nuVIFz4nuSplRZAvie4yPnO1RUc6YTyFecxyJaGj1bxZE02qZ7XpmtjqsSt3JLf3PnCyNO0zZ6P1B4cq+bz5J1n+22Nd9PQ/aLokeoYHmwJLUQVp/73sz55kTjJpppp00+Uz65p80NRhU4tNNceh4r7YdHcMv6fpoNqbrLGK4fh/eXi/wBHvPY8qnuFF7i97GRHWKxjZbxsqYy1jdGOjWYMdGRXiw0zMz1LY5zFWc5CMcp7FbLPZjJPYq5HyXkE5JbMoZm7LeR8lHM9zbMCLIsGyLL4BWQ2C2C2PiRNgNnNgjJxxxww4445DCUrDjGzoxtljHC2iNa4HYsd+C7DGkjsWOktixGNHPrYAoV4C7EMUQ1C2R+gR2ex3w/YtrH7BfCoc0FL4fsA8XsXXBLwLcUXKFGeH2EPHXg0pR24K+SBc0FWKpliHADjTCjwFI1BpgIJE0DOIOJpJsFskFkgLFyDYuQAp8jtGr1CEvktdOjedMN3maHq9EqgkbmiW9+xjaVVFfQ29Eqi37HmeP3ttCupOoM8X1Cf+UM9d1SWz38HidbO9TI9LJb+h77CUhCkT3GhHOfuC5+4ruIbAdG5e4LluC2Q2MjYSDfBXhKmOTtCCJBQTIYyKoQGkkS2RZ1iN2x1kWQnbSANDSLg08S2RQ0saSNHGuDl8lI+C2HQFwQ2KMTixhVtGnBUihp1bTNCOyOnwT+15BldRZidRnUXua+olSZ53qmSoP6G2hqvNa2ffne/BXsPI7m37ixRg6zrOYNjArRFgtnWAEtxuPkSmOxioWobINMVFug0yLFRj/6TLelXzJlWMbZewR7VZvq+mkHN3M0NMtkZ0V3ZDV00aSMNCL2FcF7EqRVwrgvY1sTFw2CHRiDjQ+KKhhUNgckLXA9R2ImtiwoPHyLljLjgLlEOBQyY/YpZ8S32NacCnmgt9g4mx5vW4OXRlNVKj0mrx2mYWaHble3JcrOzlHiWyLWNWJwrgt4oWKqFFD4RCxwQ5QRBwuqRU1rbxKC/0mXpRFrB8XOk+EIWu0mHs08VVN7sc4VyW3jUI/RCG7d+AIqUVGDbKGaajbY7WalRTVmFq9Xs9ysy0j9Rqkk0mZefVNtpO2Vs2pc20vxFJ3uzoz4+fSMcnJ22SgUg0i6SUiTkjiQlOg1IWTdCsCwphrJ7lVSJU/cXAuLJ7hKfuUlP3CWT3HwLqyE/F9ymshPex8C08linP3Fd4LmORJrkHhzOE1v5KzkD30x8OXj3/wBnOoJ/yeSVJqt/B6HLCOROE0mmqa9T5fodbLDNNNqj1+g+0EJQUMrTpbO90ZWcby9J6r9lIZJSzaOVNu3F8GVj+yutb+aUIr1pv9h7LDr8U0msir0boZPW4Irea+4f/pP7P8yvI4/stqYQlPNnhCMU3sm2zKS7ZNXdNo9T1frMJ6aeDE1vs3e55ZO3Znq9TqSHR4DQuLGIis02c2cQ2IwyezK2R8j5vZlXIzSEr5Hsyjle5cyspZHua5AbBbIbIbNAlshsizrGTjjjgJxxBww4OKsFK2PxxJ1eAeOBawwQqEdy5iVHNvRmwikhyQMEMey2Oe0OSoZBWxCdui1hjdB0H48dpByx7cBY0lQyk0KUKGSFCWi9kgV3Dc1miV2hM4F74doXPFsXKbNlHchbFrJioQ40y5SrkGgEEhEJEkIkRIbBYTBkIBYuXkNgSECnyXulxvLfuUXyafR43O/cny3mKHp9OqSNvSKsTZ
j6dbI2tOqwo8/wT/NvPrK6tKkzw2pneebu9z2nWJVGR4bLK8sn7npY+o1fYkwrFJhWakOzrAs6wCWyGyLIAkp0x8XsVw4ugHVlOw06QnGxjZJibIsGzmxATYWLeaQpsfpVc7FfhNbTqki/jRTwLZF3Gtji3TPih0VuhUUOgt0ZnF3TIucIRp1sPk6R2+L1lpFPVSqLPL9YyUmrPR62dJnkerT7p17jtRtkvdnUHRDQ5WRbQDVDWgXGxgo5DHAGhhyHwQpIdBcCB8V8oaREF8oaRNNnY4bj+I0RGG/AxQbKtaC08Lkma+COyKWmx00aWGPBjVSLeGOyLuNFbEtkXMaCKOgh8ULghyRpFJS2Ikg0iGi+AloVJFhoXNUgJUmtipmWzLmQp5mkmIqzNQtmYuqglK0bGpnVmRqJWxxnUYkti5hRTxuki1CdBR1ci6Di7YnHbLEFRnTGkmFjqDvyQuAGm37EkfKbl9CtnnUXQ6nVFbUNKDb9Bw3nOpal45NN7mFlzSyS3exc6tJzztv7jOR3ePMk6VGg0gEhkUXUmJBJAxTGJGdDqJo6jqJCGgWG0AxgLZHcQ2C2VIB9xKmJbOsrgWFMLvKykT3C4Sw5nd4juOUg4R7kDYCbOsD4YpNOx8NROL5KtnWLip6a2LqWWCqM3X1GvqOWapzb+8xosdBmdzF/qr3xpTe7Y2DKuN8FqBnU1YgGhUBqIAgWwvADewEXN7FbK6ssTezKuVlwlXIynke5ayPko5HuzfIQ2C2Q2dZoE2dZB1gSbOsg6wCbIs44AbjVsswRXxFvGtjLdM7GixAREfBHLoLMEMa2AxoelaOe32ZMY07LeHwKUNxuNUL9A9OhkWJQSdDlI1pMD4dsOO41R2LlLiu8VKxcoFxx2EziXNGo5IL0KuSFmhkgVskKNJQpuNEJUOlEW0WmoRJCJAkMFhMGRIAwJBsXLgQLZsdGjwZDNvo8dkzPz/6CfXo9OuDZxbYV9DI063RrrbEvocngn+TfP157rUqhN34PEt22/c9f12dYsjPHPk9DDLX0SYVgIlM1IVnWRZ1gE2cQShBxyOOAjYOhndsJiFYqY+47uAsFsAZ3e5d0St2ZydtI1tDDZbGe/UJq4Vsi5jWyK2FbItwVI4dfVmxQ/ErkhMUWcC3FPqov4VSDyOkzsapAZnSf0O7M5lpPjL106T+h5PXS78r9j0vUclRZ5fM+7K2Rb7ZaJSOcQ0vYLtDqSHAjsLHad2Ico4R2APGW+0FwK6SooUxsE0NcF6HKFB0hY+BqQEVQxLYQBHHvwMji9iysVPgNY/YVrYOHHTL+GO6E44UuC5ijwRVLGNFjGtxONFjGtxwz4IckLgNRpDEiGSuAWywGQqb2GNiZvYAr5XRl6rJ2+TQzPZmNrW5WkJFUs8+663M/MqtsvbRTvkoaqVscRQQnTLWFt1ZRxptpGnp4LZi18C5iWyvYcmvHgUltQSXhGVM2Lt0PhBPkRBNVsPi2lYjFKCrYzNfFrG6NFzpGfrZppplQPG9RVzZnpGt1OCttIy63O7Hwq5IbBAxQ2CHSEkGkckEkZ0nUdQaR1CBbQDQ1oXJBASxbGyQqRpAW2dZDYNlgdhWKsJPYANMNMUmFYgOyUwEyUwA7OsGzrEo2L3HwZWi9x+NkaC3iZbhwVMXJbhwYUHRGIVEZEkJsFsJgNgReR7FTK+SzkexUyvY0ySrlfJRyP5i5lezKU3uzfIDZKYIRYcSDZIiSdZBwBJy5IOTGD8bLeN7FPG9y1BmGzWYblnGivjexahwcuzPgOiJixqZz6M1Uw0hSYxMyoMT2OshMJLYcopmN7lqLTRUiqHQkawj6TQqcAlJnN2ioFecLEZMe3BdaFTiXKTMyQa3K8lTNHLDZlHLGm2aykUcccNKGDIJgsABgS4DYEgBT5PQ9JVQR59btHpOlKsaMf5H+p5+t3TLdGtN1if0MvSq5L6mnm2xP6HP/AB42jyfX51imeTo9P19/yTPNHfj4w19QSccaB1nHEpAHJBJWEo2EoiAEjqG9p3aLoLSJoOgWLoQwZBMFjCcS7ppe5vaKFJGNpI92VM39LCkjDzUT6vYlsizERjVFhI4v7WbHkt6dWypBbov6ZcF4napcSpFbUSpMsPZFHVSpM7L8XfjE6pkqLVmClbNPqeS20mZyXkyZOSColKjhhFHUvQlEpWOBCR3YGkEojSV2Hdg9Rvwd2ACVAlR2HKJPYPpLyx78BLHvwPUA+z2JroKhClwOxqmEobBY1vwSZsFSHY+RaVIbjRUCxENARDRpDFewDZLewDZQQ2IyPYZJiJvYCVc72ZlZt22aWZmZqHSYIrO1E0m9zPyO3bLedtypbgQ0s57tOg6z72l4cdtUjTw46XAen0lJNqh8oqKpIzt6qQtypUhmJb7im0nbZyzK6TFVLdpukMTVFWM/NhLJ7iA8rpGPq81yas0c2RdjdmDq8m7LzOhQ17tNmQ+WX9VktNFF8nZj4Qo8j4IRHwWIBojEg0gYoYuDOkijqJo5oABi5DWKkOGVIVLgbJCpI0gIkCMktwEi4bkiSUiaAkIJEIkQScjjgNNnWQdYA2DLGPwVYvcsY2Z6NdxFqD2KmFlqBhQemGmLXAaZKUtgNhN7APgIReR7Mp5mWp8Mq5eWXkKeZ7MpT/WLuXgpT5OjIQcccUBHAnAE2dZJwEiyTiEANgy1B2U4umPhIy1AvY5FuEtihjkWYT4OXcNci9h0WVYSsdFnPqGemGmJUg0zOwzkw1IQpEqZPAsqQcZleM78jYs0hLEZJoNbiIMamXIBNASQb4BZUBOSNooZocmjJFbNBMuEzWqZFDskab2FVRaaBgsYwGhkBi5DGLkABH9dfU9P0xVjR5nGryr6nqOnKsaOf+T/AKnn629IrmjQ1LrEylol86Leqf8AJMy/j/G0eM6+/lr3PP0b3XnbS9zDaO/Hxz36E4KiK3LDkhkINs6EW2PUe1IVoCopBUFRxKg0cSQ+BAL4BYTBYwBsBsJgPdpe4yX+nwtpvyb+njsjJ0GOkjawqkjj8t9nlZxrYfFCsa2HRRguGQVtGlp40ihhVtGliVRNfHPap9FkdIytbOk9zRzSpGJr8lRbOi1WmDrZ92Vq+BEeDsku7I2SuDNkJsghsixgcQ4oCIxABpBJEIJDJKRNI6MXJ0k2/YfDTTfNIVvPoISCUG+E39C9j0a2b3LUNPGPhEXy5g/KEhkYkJDYLc1boaqPAONbjci2oCC+Ymj+zq2Q3GhaXAyA4DYhIFMlM0hpbpCmw5vYRJ7FEichGR7ByexXyS2Aqr5pcmZqHyXc0+TN1E+UFZ0nFjUp21ZqYMKmltsZuGVNGrpsiSRPTzxdhpl2UlRQ1eGWO2tzSx5kluxOoaybeCbYvU/48xnzzc3FppEY8teTbyaTHkVNJlLN0vl4217BLGZCzbck/G9ypm0+owXcW0vKKstS1s7TLmen1e1GqqNWYupz23uTn1F3uZ2XJbe5rjBoyztiU9zm7ORvJwjI8jsfgTEdAnRHRGIXEYiCF4IaCOYjLaFtDWC0MENCp
IfIVMuGrzW4NBTALNNHIk4CdRxxwBxx1ggE2dZ1kWMzIvcs4m7KsHuWMb3I0F3E7LUGVMT2LMHRhSWU9gkxUWGmRSE2C2SQwhFT4KmXllvI9inle5cCpmfJTlyy3mexUk92b5CDiCSg4444A4444CcECcAEmNgxSJTJoXMcixCZQhIfCZjrIaGOY+M7M+E6HQyHPrBrymGplRTvyGpmVyaz3E99CFM7vJ/IWYZNyxCdmcp0x+LJ7h+Q0oPYcinjndFqD2KkBiex0jkE0UCmhOSNostAuNrgcDLyw3ZXcaZo5se9pFOcdy00imA0NaBaBJTQpoe0LaH0Awr+VR6jQL5F9DzmCP8AKo9LoFUEc38m+lZ+tvRL50P1jrExOiVy+4brX/JE+D41nx4rrbvIl7mS1bNTrL/lkjMaO3HxhfoaJSJoOEVZZDxxpWyW9zm6VIgRus6yDhGmzmyCGwDmA2E2C2ABJg413ZEjpMbpI92W/cd9Qq2tFCkjWxrgoaWFJGjjR5+72qh8OBsULih0ERFRY06tmhHaKKmmjvZbk6R0eOemmVXUzpPc871TLUGrNnVzpPc8t1TMnLtvk0qN1TTt2GnSExmF3bC4gbZ1i3MhT3DhrEWMTK8XwPwwnkdQV+/gXwGJlzT6aWSnJUvQbpNEo03u/VmljxJKkjDfmk+HwjHp1FUlS9ixDGl4GdoSVI5dbtOQKSXgKjjrI6ZaQ2C3FxQ6CPXaIyIXFVIbMWv1iak5PgOIpPehqYRRiJsBM6zSBGSVIS5E5Zb8iZSGSJy2KuWdIPJLYqZph1FqvnnyZmebcqLOfJdpclKabdk9Z326OTtZbw6pKjOnsxfxHHyH5tOXj0C1ardhLWQe1o869Q0uQP0p3yyL46f6ephnT82PjJM8ti1ji07f4mhp+oJ0myLnWTlbbxQmqaRna3o+HOm1Gn6rYs4dXGSW9/eW4TUlswnksPkrwXUuj6nStygnkgvRboxJXbTTTXKZ9YnhhkTTS3PP9W+zuLUJzxrsyeq8nX4/PPlTZY8KwkWNboM+jyOOWDrw0tiukdMsvwS9MiPgIjyPgTQahiFRGIikYcQmSIwMFhMGQ4ZUhUxsnsJmXARPkgmXJBZuRxy4OsYcQ2dZAE446yANzZx1HDAo3ZYhyIjyPhyRoLmJ7FmDsrYixAwpHxYxMVEYuCKQmQ2TYDYEDI9inlfJayPYqZfJcCpmZUlyy1lZUlyzfIccQuCSjScQSInHHHAHHHHAQiUCggAk9xkZUJQaZFgWIz2GxnRUTGKXuZ3JLiyDFkKamGpmdwfVxZAu8prIEshP4NZ79xmPJvyUviDMUraFcBtaeV0X8b2MvTT2Ro45XRnzhrKCAjug0gDmjqJZFiCvlhaKWWFNmlJJoq5oJlSis6Spi2WMsaK7GktgsNgMZGYF/Ko9Jol8i+h5zT75Uel0a+RHJ/Jqs/WzoVu37E691jYWiSSf0A17+Rr2K8H+rX+niOsO9Ql7Gei91Z3qvuKK5OzPxzpSGLZAIKywJsghs6wNJxFnWHA6yGziLEHNgNhMBjh8BJl3p8Ladc7lCVtpGz07HSRHkvMp/tr6eNJF3GivhVJFqCPP0syCHwQmJYxq2giou6dUkxmZ0mdhVRFaiVJnVmcjSeoytdkpPc8dr83fqHvstj0nVMqhCTvwePyz7srfqypO1hu+z4TGqVopwk7GxlsPhQ5uzopykkk23wkO0miy6lppOMPV+fob2j6bDCk0rflvlmW/LnClDR9OnOnktL0NvT6WGNJJJV6IsY8SiuKQ1RXg4d+a6VwEIJDEiUqJRiYWqIbJkA7AOs6yDgCYjsa2ERH4+D2FomKvcZkYi9yaR6YaYlMYmEM2yG9gLIb2Lh0rLK2JlLYnJNJttlLPqVFNWPqLRZsqinuZubM5NpEZMzm+dhTaSJ91AWt22+RU2kHKaS5K2Sd8F5yVLyNFTK3ZZnwVchrIkptsG6CYLKNHe0xkNQ0+REgGxfmU2rh1rg1v+01tJ1JNpNnknNryHi1Ti1uY78HfhyvoWDVRmluWlUl6nitH1FxaVm/o+oRnSs5NZuGkq1rNBi1EGpQTT9UeR6p9nZ4W56ZNrntZ7jHkjNbNEzxRmmmrL8fmuU3EvuPlTjKEnGSaa5TQyPB7fqnQcOqTko9s/DXJ5LV6DPosjjli68SS2Z3Z8k1Ed59KQxC0HEZmJnMFBAAsCQbQLQAqXAiY+SEzLhkS5IJktwWizSCccMJsizrOqxkjkmiapHNgaGRZzIAGR5HwK8SxAnQXMXBZhwVsXBZgc9I2IcQIhoikmyGS2AwBeR7FTL5LWR2irkfJcJTyvkqvdss5Xsyq3udGTScccMOJIJAnHHHCDjkccBJRIKCQBJKIJQgJMJMFBIkhphJgRJFwC7iVIE4XAYpe47FLcrIfi5QrPQaumlSRpYpcGRhbVGhhnwcuopp43sh64KeGV0Wk9iA6TBOe7IewG5sVkVoNsFuxhSzQKc1TNLIrTKOaNFQldi2G9hcnsUk3Su8qPT6NfIjy+kd5V9T1Wj/VRxfylZbWjXyv6CeouoP6D9J+oyv1L9Rl+D/Vr/Tw/VHerZURa6k71ciomdufjnEjrIshsoCsiwWyLGY7OtAdyBlPakPhwbn4RHcKs67Hwzu4BsFM5uw4BY1eRfU3tDGkjE06udnoNEqijn83wue2liWyLMUJxLZD4rY4aocUWsCtorxRc00dx5+nF1KolHVzpMuzdRMnXTpP2OtpfUec63mrHJJ7s8wmzW63m7snZfkzdPhyajLHFii5SfhePcrPqdc190WHHPJNQxxcpPhI9D07o6VTzrul6eEXOl9LhpcabVzfLa5NiGNRWyOLzfyPfMrkKw6eMEkktvYsKCRKVMOtrOK21YFsSmQzooQGdwddENgEN7Arc5s5eBQOaBewbewtjCYofDZMTHkclsz2Vl5HyITtsbkfIlbszpHJhJi09iHkUfJUM263F5syhF7lXNqkk6exl6nWubcYsfUXR+p1e7SZQnNzdt7AW3u2C5pLZjk6gbkkhc8iS5FTyVshdtvcuQCc22CccaRNLnwVspanwVc3JQJYLCkBIZgkBINgSGZOR7CW6G5GKZcODhmcHyaWk1zi1uZDIU3F7Mnfjmob3Wg6ldJs39PqI5Et1Z820mrcWtz0vT9e9k2ed5PDc30vr10UpIr63puLVYnCcE016C9NqlNLfc0MWRNclYosl+vn/VOh5tFNzxJzxenlGUtj6rn08c0Gmk7PG9c6FLG3m08fdpeTqzv/AKys/P8A/HnkwgUmm00014YS3KHUNAtBsFgCpLYRNFpoRkWzKlCs1uC0Ma3BaNFFtUQE0C0UEUEtiDrGEtkMg4DcR5JIXIwZAsY1uivDks4zPQW8fBYgV8fA+BhSOiGgIhoik58AthPgBgCsnBUystZHsVMrNMhVy+Sq+WWMr5K7e5vPgcuCSESUHEkEiJxxxwg5nHWcBOCQISAJRKIRKEBIJcAoJCAkSRElCJJxxIg5IsYUJirZYxqqJ0FrHsXcMnaKWNXR
Zg6Oen1pYZU0yzGZnY8niyxCd+TI11NPc5ioT2Du0AC3uQ2c3YLZUCJK0VM0bRabE5FaGGbkVMRN0W80aso5tmaQqsaJ3mX1PV6T9VHkunu8q+p63R/qo4P5f08NvSKoMrdS/Uf0LWk/UKvUv1H9DTw/6tb8eF6l/ncirexY6k/8rmVLO3PxzisFyBbIbL4BWQ5ANguVIrgHKfoBYDluSmVxcHbZKBvYJAoSJIRIhxZ0kW5Weg0iqKMTRR3X1N/TRpI4/NSXsaHxQnGtixBHGDIIv6eNIp41ukX8bUIW2kl5ZeJ7VPqczpMw9fNtOi5repYcaaTTfueb1vUZZpOMHsb2jepPTK1GH42qdLvbdJHouk9OWnxptLve7aX7BHStFbWbIvmfFrhHoIQSVJHJ5vNefmIzP7DGFLgKqDpHVaOKrCgm6VA1RFN7jDuWGkAtgkxBL2AbsN8Cm9xBz3J4IXKOY58DnwA2E+AWOB//2Q==\"]}" http://localhost:8866/predict/ocr_system \ No newline at end of file diff --git a/ppocr/data/det/dataset_traversal.py b/ppocr/data/det/dataset_traversal.py index c4ad5ced4f0240fa776afce8b7bf3e34310bd186..bd055c82be3bd5c762ccaf7a0134f89fbe4fe290 100644 --- a/ppocr/data/det/dataset_traversal.py +++ b/ppocr/data/det/dataset_traversal.py @@ -31,22 +31,27 @@ class TrainReader(object): def __init__(self, params): self.num_workers = params['num_workers'] self.label_file_path = params['label_file_path'] + print(self.label_file_path) + self.use_mul_data = False + if isinstance(self.label_file_path, list): + self.use_mul_data = True + self.data_ratio_list = params['data_ratio_list'] self.batch_size = params['train_batch_size_per_card'] assert 'process_function' in params,\ "absence process_function in Reader" self.process = create_module(params['process_function'])(params) def __call__(self, process_id): - with open(self.label_file_path, "rb") as fin: - label_infor_list = fin.readlines() - img_num = len(label_infor_list) - img_id_list = list(range(img_num)) - if sys.platform == "win32" and self.num_workers != 1: - print("multiprocess is not fully compatible with Windows." - "num_workers will be 1.") - self.num_workers = 1 def sample_iter_reader(): + with open(self.label_file_path, "rb") as fin: + label_infor_list = fin.readlines() + img_num = len(label_infor_list) + img_id_list = list(range(img_num)) random.shuffle(img_id_list) + if sys.platform == "win32" and self.num_workers != 1: + print("multiprocess is not fully compatible with Windows." + "num_workers will be 1.") + self.num_workers = 1 for img_id in range(process_id, img_num, self.num_workers): label_infor = label_infor_list[img_id_list[img_id]] outs = self.process(label_infor) @@ -54,13 +59,64 @@ class TrainReader(object): continue yield outs + def sample_iter_reader_mul(): + batch_size = 1000 + data_source_list = self.label_file_path + batch_size_list = list(map(int, [max(1.0, batch_size * x) for x in self.data_ratio_list])) + print(self.data_ratio_list, batch_size_list) + + data_filename_list, data_size_list, fetch_record_list = [], [], [] + for data_source in data_source_list: + image_files = open(data_source, "rb").readlines() + random.shuffle(image_files) + data_filename_list.append(image_files) + data_size_list.append(len(image_files)) + fetch_record_list.append(0) + + image_batch = [] + # get a batch of img_fns and poly_fns + for i in range(0, len(batch_size_list)): + bs = batch_size_list[i] + ds = data_size_list[i] + image_names = data_filename_list[i] + fetch_record = fetch_record_list[i] + data_path = data_source_list[i] + for j in range(fetch_record, fetch_record + bs): + index = j % ds + image_batch.append(image_names[index]) + + if (fetch_record + bs) > ds: + fetch_record_list[i] = 0 + random.shuffle(data_filename_list[i]) + else: + fetch_record_list[i] = fetch_record + bs + + if sys.platform == "win32": + print("multiprocess is not fully compatible with Windows." 
+                      "num_workers will be 1.")
+                self.num_workers = 1
+
+            for label_infor in image_batch:
+                outs = self.process(label_infor)
+                if outs is None:
+                    continue
+                yield outs
+
         def batch_iter_reader():
             batch_outs = []
-            for outs in sample_iter_reader():
-                batch_outs.append(outs)
-                if len(batch_outs) == self.batch_size:
-                    yield batch_outs
-                    batch_outs = []
+            if self.use_mul_data:
+                print("Sample data from multiple datasets!")
+                for outs in sample_iter_reader_mul():
+                    batch_outs.append(outs)
+                    if len(batch_outs) == self.batch_size:
+                        yield batch_outs
+                        batch_outs = []
+            else:
+                for outs in sample_iter_reader():
+                    batch_outs.append(outs)
+                    if len(batch_outs) == self.batch_size:
+                        yield batch_outs
+                        batch_outs = []
 
         return batch_iter_reader
diff --git a/ppocr/data/det/sast_process.py b/ppocr/data/det/sast_process.py
new file mode 100644
index 0000000000000000000000000000000000000000..74a848465f4cedb4a7007f61adc509d80885922c
--- /dev/null
+++ b/ppocr/data/det/sast_process.py
@@ -0,0 +1,781 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+import math
+import cv2
+import numpy as np
+import json
+
+
+class SASTProcessTrain(object):
+    """
+    SAST data-processing function for training.
+    """
+    def __init__(self, params):
+        self.img_set_dir = params['img_set_dir']
+        self.min_crop_side_ratio = params['min_crop_side_ratio']
+        self.min_crop_size = params['min_crop_size']
+        image_shape = params['image_shape']
+        self.input_size = image_shape[1]
+        self.min_text_size = params['min_text_size']
+        self.max_text_size = params['max_text_size']
+
+    def convert_label_infor(self, label_infor):
+        # a label line is "<image path>\t<json list of boxes>"
+        label_infor = label_infor.decode()
+        label_infor = label_infor.encode('utf-8').decode('utf-8-sig')
+        substr = label_infor.strip("\n").split("\t")
+        img_path = self.img_set_dir + substr[0]
+        label = json.loads(substr[1])
+        nBox = len(label)
+        wordBBs, txts, txt_tags = [], [], []
+        for bno in range(0, nBox):
+            wordBB = label[bno]['points']
+            txt = label[bno]['transcription']
+            wordBBs.append(wordBB)
+            txts.append(txt)
+            # '###' marks illegible text that should be ignored in training
+            if txt == '###':
+                txt_tags.append(True)
+            else:
+                txt_tags.append(False)
+        wordBBs = np.array(wordBBs, dtype=np.float32)
+        txt_tags = np.array(txt_tags, dtype=np.bool)
+        return img_path, wordBBs, txt_tags, txts
+
+    def quad_area(self, poly):
+        """
+        Compute the signed area of a quad; the sign encodes the vertex order.
+        :param poly: 4x2 array of corner points
+        :return: signed area
+        """
+        edge = [
+            (poly[1][0] - poly[0][0]) * (poly[1][1] + poly[0][1]),
+            (poly[2][0] - poly[1][0]) * (poly[2][1] + poly[1][1]),
+            (poly[3][0] - poly[2][0]) * (poly[3][1] + poly[2][1]),
+            (poly[0][0] - poly[3][0]) * (poly[0][1] + poly[3][1])
+        ]
+        return np.sum(edge) / 2.
+
+    def gen_quad_from_poly(self, poly):
+        """
+        Generate the minimum-area quad from a polygon.
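+        The quad comes from cv2.minAreaRect; its vertex order is then rotated
+        so that the quad's corners align with the polygon's first and last
+        points.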
+        """
+        point_num = poly.shape[0]
+        min_area_quad = np.zeros((4, 2), dtype=np.float32)
+        rect = cv2.minAreaRect(poly.astype(np.int32))  # (center (x,y), (width, height), angle of rotation)
+        box = np.array(cv2.boxPoints(rect))
+
+        first_point_idx = 0
+        min_dist = 1e4
+        for i in range(4):
+            dist = np.linalg.norm(box[(i + 0) % 4] - poly[0]) + \
+                   np.linalg.norm(box[(i + 1) % 4] - poly[point_num // 2 - 1]) + \
+                   np.linalg.norm(box[(i + 2) % 4] - poly[point_num // 2]) + \
+                   np.linalg.norm(box[(i + 3) % 4] - poly[-1])
+            if dist < min_dist:
+                min_dist = dist
+                first_point_idx = i
+        for i in range(4):
+            min_area_quad[i] = box[(first_point_idx + i) % 4]
+
+        return min_area_quad
+
+    def check_and_validate_polys(self, polys, tags, im_size):
+        """
+        Check that all text polys share the same vertex order and filter out
+        invalid or degenerate polygons.
+        :param polys:
+        :param tags:
+        :param im_size: (height, width) of the image
+        :return:
+        """
+        (h, w) = im_size
+        if polys.shape[0] == 0:
+            return polys, np.array([]), np.array([])
+        polys[:, :, 0] = np.clip(polys[:, :, 0], 0, w - 1)
+        polys[:, :, 1] = np.clip(polys[:, :, 1], 0, h - 1)
+
+        validated_polys = []
+        validated_tags = []
+        hv_tags = []
+        for poly, tag in zip(polys, tags):
+            quad = self.gen_quad_from_poly(poly)
+            p_area = self.quad_area(quad)
+            if abs(p_area) < 1:
+                print('invalid poly')
+                continue
+            if p_area > 0:
+                if not tag:
+                    print('poly in wrong direction')
+                    tag = True  # reversed polys are treated as ignored text
+                poly = poly[(0, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1), :]
+                quad = quad[(0, 3, 2, 1), :]
+
+            len_w = np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[3] - quad[2])
+            len_h = np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2])
+            # hv_tag: 1 for horizontal text, 0 for clearly vertical text
+            hv_tag = 1
+
+            if len_w * 2.0 < len_h:
+                hv_tag = 0
+
+            validated_polys.append(poly)
+            validated_tags.append(tag)
+            hv_tags.append(hv_tag)
+        return np.array(validated_polys), np.array(validated_tags), np.array(hv_tags)
+
+    def crop_area(self, im, polys, tags, hv_tags, txts, crop_background=False, max_tries=25):
+        """
+        Make a random crop from the input image.
+        :param im:
+        :param polys:
+        :param tags:
+        :param crop_background: if True, return a crop that contains no text
+        :param max_tries:
+        :return:
+        """
+        h, w, _ = im.shape
+        pad_h = h // 10
+        pad_w = w // 10
+        h_array = np.zeros((h + pad_h * 2), dtype=np.int32)
+        w_array = np.zeros((w + pad_w * 2), dtype=np.int32)
+        for poly in polys:
+            poly = np.round(poly, decimals=0).astype(np.int32)
+            minx = np.min(poly[:, 0])
+            maxx = np.max(poly[:, 0])
+            w_array[minx + pad_w: maxx + pad_w] = 1
+            miny = np.min(poly[:, 1])
+            maxy = np.max(poly[:, 1])
+            h_array[miny + pad_h: maxy + pad_h] = 1
+        # ensure that the cropped area does not cut across a text region
+        h_axis = np.where(h_array == 0)[0]
+        w_axis = np.where(w_array == 0)[0]
+        if len(h_axis) == 0 or len(w_axis) == 0:
+            return im, polys, tags, hv_tags, txts
+        for i in range(max_tries):
+            xx = np.random.choice(w_axis, size=2)
+            xmin = np.min(xx) - pad_w
+            xmax = np.max(xx) - pad_w
+            xmin = np.clip(xmin, 0, w - 1)
+            xmax = np.clip(xmax, 0, w - 1)
+            yy = np.random.choice(h_axis, size=2)
+            ymin = np.min(yy) - pad_h
+            ymax = np.max(yy) - pad_h
+            ymin = np.clip(ymin, 0, h - 1)
+            ymax = np.clip(ymax, 0, h - 1)
+            if xmax - xmin < self.min_crop_size or \
+                    ymax - ymin < self.min_crop_size:
+                # area too small
+                continue
+            if polys.shape[0] != 0:
+                poly_axis_in_area = (polys[:, :, 0] >= xmin) & (polys[:, :, 0] <= xmax) \
+                                    & (polys[:, :, 1] >= ymin) & (polys[:, :, 1] <= ymax)
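+                # keep a poly only when all four of its points fall inside the crop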
>= ymin) & (polys[:, :, 1] <= ymax) + selected_polys = np.where(np.sum(poly_axis_in_area, axis=1) == 4)[0] + else: + selected_polys = [] + if len(selected_polys) == 0: + # no text in this area + if crop_background: + txts_tmp = [] + for selected_poly in selected_polys: + txts_tmp.append(txts[selected_poly]) + txts = txts_tmp + return im[ymin : ymax + 1, xmin : xmax + 1, :], \ + polys[selected_polys], tags[selected_polys], hv_tags[selected_polys], txts + else: + continue + im = im[ymin: ymax + 1, xmin: xmax + 1, :] + polys = polys[selected_polys] + tags = tags[selected_polys] + hv_tags = hv_tags[selected_polys] + txts_tmp = [] + for selected_poly in selected_polys: + txts_tmp.append(txts[selected_poly]) + txts = txts_tmp + polys[:, :, 0] -= xmin + polys[:, :, 1] -= ymin + return im, polys, tags, hv_tags, txts + + return im, polys, tags, hv_tags, txts + + def generate_direction_map(self, poly_quads, direction_map): + """ + """ + width_list = [] + height_list = [] + for quad in poly_quads: + quad_w = (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3])) / 2.0 + quad_h = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[2] - quad[1])) / 2.0 + width_list.append(quad_w) + height_list.append(quad_h) + norm_width = max(sum(width_list) / (len(width_list) + 1e-6), 1.0) + average_height = max(sum(height_list) / (len(height_list) + 1e-6), 1.0) + + for quad in poly_quads: + direct_vector_full = ((quad[1] + quad[2]) - (quad[0] + quad[3])) / 2.0 + direct_vector = direct_vector_full / (np.linalg.norm(direct_vector_full) + 1e-6) * norm_width + direction_label = tuple(map(float, [direct_vector[0], direct_vector[1], 1.0 / (average_height + 1e-6)])) + cv2.fillPoly(direction_map, quad.round().astype(np.int32)[np.newaxis, :, :], direction_label) + return direction_map + + def calculate_average_height(self, poly_quads): + """ + """ + height_list = [] + for quad in poly_quads: + quad_h = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[2] - quad[1])) / 2.0 + height_list.append(quad_h) + average_height = max(sum(height_list) / len(height_list), 1.0) + return average_height + + def generate_tcl_label(self, hw, polys, tags, ds_ratio, + tcl_ratio=0.3, shrink_ratio_of_width=0.15): + """ + Generate polygon. 
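+        Concretely: build the TCL score map (shrunk text center-line
+        regions), the 5-channel TBO border-offset map, and a training mask
+        in which ignored ('###') text regions are down-weighted to 0.15.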
+ """ + h, w = hw + h, w = int(h * ds_ratio), int(w * ds_ratio) + polys = polys * ds_ratio + + score_map = np.zeros((h, w,), dtype=np.float32) + tbo_map = np.zeros((h, w, 5), dtype=np.float32) + training_mask = np.ones((h, w,), dtype=np.float32) + direction_map = np.ones((h, w, 3)) * np.array([0, 0, 1]).reshape([1, 1, 3]).astype(np.float32) + + for poly_idx, poly_tag in enumerate(zip(polys, tags)): + poly = poly_tag[0] + tag = poly_tag[1] + + # generate min_area_quad + min_area_quad, center_point = self.gen_min_area_quad_from_poly(poly) + min_area_quad_h = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[3]) + + np.linalg.norm(min_area_quad[1] - min_area_quad[2])) + min_area_quad_w = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[1]) + + np.linalg.norm(min_area_quad[2] - min_area_quad[3])) + + if min(min_area_quad_h, min_area_quad_w) < self.min_text_size * ds_ratio \ + or min(min_area_quad_h, min_area_quad_w) > self.max_text_size * ds_ratio: + continue + + if tag: + # continue + cv2.fillPoly(training_mask, poly.astype(np.int32)[np.newaxis, :, :], 0.15) + else: + tcl_poly = self.poly2tcl(poly, tcl_ratio) + tcl_quads = self.poly2quads(tcl_poly) + poly_quads = self.poly2quads(poly) + # stcl map + stcl_quads, quad_index = self.shrink_poly_along_width(tcl_quads, shrink_ratio_of_width=shrink_ratio_of_width, + expand_height_ratio=1.0 / tcl_ratio) + # generate tcl map + cv2.fillPoly(score_map, np.round(stcl_quads).astype(np.int32), 1.0) + + # generate tbo map + for idx, quad in enumerate(stcl_quads): + quad_mask = np.zeros((h, w), dtype=np.float32) + quad_mask = cv2.fillPoly(quad_mask, np.round(quad[np.newaxis, :, :]).astype(np.int32), 1.0) + tbo_map = self.gen_quad_tbo(poly_quads[quad_index[idx]], quad_mask, tbo_map) + return score_map, tbo_map, training_mask + + def generate_tvo_and_tco(self, hw, polys, tags, tcl_ratio=0.3, ds_ratio=0.25): + """ + Generate tcl map, tvo map and tbo map. 
+ """ + h, w = hw + h, w = int(h * ds_ratio), int(w * ds_ratio) + polys = polys * ds_ratio + poly_mask = np.zeros((h, w), dtype=np.float32) + + tvo_map = np.ones((9, h, w), dtype=np.float32) + tvo_map[0:-1:2] = np.tile(np.arange(0, w), (h, 1)) + tvo_map[1:-1:2] = np.tile(np.arange(0, w), (h, 1)).T + poly_tv_xy_map = np.zeros((8, h, w), dtype=np.float32) + + # tco map + tco_map = np.ones((3, h, w), dtype=np.float32) + tco_map[0] = np.tile(np.arange(0, w), (h, 1)) + tco_map[1] = np.tile(np.arange(0, w), (h, 1)).T + poly_tc_xy_map = np.zeros((2, h, w), dtype=np.float32) + + poly_short_edge_map = np.ones((h, w), dtype=np.float32) + + for poly, poly_tag in zip(polys, tags): + + if poly_tag == True: + continue + + # adjust point order for vertical poly + poly = self.adjust_point(poly) + + # generate min_area_quad + min_area_quad, center_point = self.gen_min_area_quad_from_poly(poly) + min_area_quad_h = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[3]) + + np.linalg.norm(min_area_quad[1] - min_area_quad[2])) + min_area_quad_w = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[1]) + + np.linalg.norm(min_area_quad[2] - min_area_quad[3])) + + # generate tcl map and text, 128 * 128 + tcl_poly = self.poly2tcl(poly, tcl_ratio) + + # generate poly_tv_xy_map + for idx in range(4): + cv2.fillPoly(poly_tv_xy_map[2 * idx], + np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), + float(min(max(min_area_quad[idx, 0], 0), w))) + cv2.fillPoly(poly_tv_xy_map[2 * idx + 1], + np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), + float(min(max(min_area_quad[idx, 1], 0), h))) + + # generate poly_tc_xy_map + for idx in range(2): + cv2.fillPoly(poly_tc_xy_map[idx], + np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), float(center_point[idx])) + + # generate poly_short_edge_map + cv2.fillPoly(poly_short_edge_map, + np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), + float(max(min(min_area_quad_h, min_area_quad_w), 1.0))) + + # generate poly_mask and training_mask + cv2.fillPoly(poly_mask, np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), 1) + + tvo_map *= poly_mask + tvo_map[:8] -= poly_tv_xy_map + tvo_map[-1] /= poly_short_edge_map + tvo_map = tvo_map.transpose((1, 2, 0)) + + tco_map *= poly_mask + tco_map[:2] -= poly_tc_xy_map + tco_map[-1] /= poly_short_edge_map + tco_map = tco_map.transpose((1, 2, 0)) + + return tvo_map, tco_map + + def adjust_point(self, poly): + """ + adjust point order. + """ + point_num = poly.shape[0] + if point_num == 4: + len_1 = np.linalg.norm(poly[0] - poly[1]) + len_2 = np.linalg.norm(poly[1] - poly[2]) + len_3 = np.linalg.norm(poly[2] - poly[3]) + len_4 = np.linalg.norm(poly[3] - poly[0]) + + if (len_1 + len_3) * 1.5 < (len_2 + len_4): + poly = poly[[1, 2, 3, 0], :] + + elif point_num > 4: + vector_1 = poly[0] - poly[1] + vector_2 = poly[1] - poly[2] + cos_theta = np.dot(vector_1, vector_2) / (np.linalg.norm(vector_1) * np.linalg.norm(vector_2) + 1e-6) + theta = np.arccos(np.round(cos_theta, decimals=4)) + + if abs(theta) > (70 / 180 * math.pi): + index = list(range(1, point_num)) + [0] + poly = poly[np.array(index), :] + return poly + + def gen_min_area_quad_from_poly(self, poly): + """ + Generate min area quad from poly. 
+ """ + point_num = poly.shape[0] + min_area_quad = np.zeros((4, 2), dtype=np.float32) + if point_num == 4: + min_area_quad = poly + center_point = np.sum(poly, axis=0) / 4 + else: + rect = cv2.minAreaRect(poly.astype(np.int32)) # (center (x,y), (width, height), angle of rotation) + center_point = rect[0] + box = np.array(cv2.boxPoints(rect)) + + first_point_idx = 0 + min_dist = 1e4 + for i in range(4): + dist = np.linalg.norm(box[(i + 0) % 4] - poly[0]) + \ + np.linalg.norm(box[(i + 1) % 4] - poly[point_num // 2 - 1]) + \ + np.linalg.norm(box[(i + 2) % 4] - poly[point_num // 2]) + \ + np.linalg.norm(box[(i + 3) % 4] - poly[-1]) + if dist < min_dist: + min_dist = dist + first_point_idx = i + + for i in range(4): + min_area_quad[i] = box[(first_point_idx + i) % 4] + + return min_area_quad, center_point + + def shrink_quad_along_width(self, quad, begin_width_ratio=0., end_width_ratio=1.): + """ + Generate shrink_quad_along_width. + """ + ratio_pair = np.array([[begin_width_ratio], [end_width_ratio]], dtype=np.float32) + p0_1 = quad[0] + (quad[1] - quad[0]) * ratio_pair + p3_2 = quad[3] + (quad[2] - quad[3]) * ratio_pair + return np.array([p0_1[0], p0_1[1], p3_2[1], p3_2[0]]) + + def shrink_poly_along_width(self, quads, shrink_ratio_of_width, expand_height_ratio=1.0): + """ + shrink poly with given length. + """ + upper_edge_list = [] + + def get_cut_info(edge_len_list, cut_len): + for idx, edge_len in enumerate(edge_len_list): + cut_len -= edge_len + if cut_len <= 0.000001: + ratio = (cut_len + edge_len_list[idx]) / edge_len_list[idx] + return idx, ratio + + for quad in quads: + upper_edge_len = np.linalg.norm(quad[0] - quad[1]) + upper_edge_list.append(upper_edge_len) + + # length of left edge and right edge. + left_length = np.linalg.norm(quads[0][0] - quads[0][3]) * expand_height_ratio + right_length = np.linalg.norm(quads[-1][1] - quads[-1][2]) * expand_height_ratio + + shrink_length = min(left_length, right_length, sum(upper_edge_list)) * shrink_ratio_of_width + # shrinking length + upper_len_left = shrink_length + upper_len_right = sum(upper_edge_list) - shrink_length + + left_idx, left_ratio = get_cut_info(upper_edge_list, upper_len_left) + left_quad = self.shrink_quad_along_width(quads[left_idx], begin_width_ratio=left_ratio, end_width_ratio=1) + right_idx, right_ratio = get_cut_info(upper_edge_list, upper_len_right) + right_quad = self.shrink_quad_along_width(quads[right_idx], begin_width_ratio=0, end_width_ratio=right_ratio) + + out_quad_list = [] + if left_idx == right_idx: + out_quad_list.append([left_quad[0], right_quad[1], right_quad[2], left_quad[3]]) + else: + out_quad_list.append(left_quad) + for idx in range(left_idx + 1, right_idx): + out_quad_list.append(quads[idx]) + out_quad_list.append(right_quad) + + return np.array(out_quad_list), list(range(left_idx, right_idx + 1)) + + def vector_angle(self, A, B): + """ + Calculate the angle between vector AB and x-axis positive direction. + """ + AB = np.array([B[1] - A[1], B[0] - A[0]]) + return np.arctan2(*AB) + + def theta_line_cross_point(self, theta, point): + """ + Calculate the line through given point and angle in ax + by + c =0 form. + """ + x, y = point + cos = np.cos(theta) + sin = np.sin(theta) + return [sin, -cos, cos * y - sin * x] + + def line_cross_two_point(self, A, B): + """ + Calculate the line through given point A and B in ax + by + c =0 form. 
+ """ + angle = self.vector_angle(A, B) + return self.theta_line_cross_point(angle, A) + + def average_angle(self, poly): + """ + Calculate the average angle between left and right edge in given poly. + """ + p0, p1, p2, p3 = poly + angle30 = self.vector_angle(p3, p0) + angle21 = self.vector_angle(p2, p1) + return (angle30 + angle21) / 2 + + def line_cross_point(self, line1, line2): + """ + line1 and line2 in 0=ax+by+c form, compute the cross point of line1 and line2 + """ + a1, b1, c1 = line1 + a2, b2, c2 = line2 + d = a1 * b2 - a2 * b1 + + if d == 0: + #print("line1", line1) + #print("line2", line2) + print('Cross point does not exist') + return np.array([0, 0], dtype=np.float32) + else: + x = (b1 * c2 - b2 * c1) / d + y = (a2 * c1 - a1 * c2) / d + + return np.array([x, y], dtype=np.float32) + + def quad2tcl(self, poly, ratio): + """ + Generate center line by poly clock-wise point. (4, 2) + """ + ratio_pair = np.array([[0.5 - ratio / 2], [0.5 + ratio / 2]], dtype=np.float32) + p0_3 = poly[0] + (poly[3] - poly[0]) * ratio_pair + p1_2 = poly[1] + (poly[2] - poly[1]) * ratio_pair + return np.array([p0_3[0], p1_2[0], p1_2[1], p0_3[1]]) + + def poly2tcl(self, poly, ratio): + """ + Generate center line by poly clock-wise point. + """ + ratio_pair = np.array([[0.5 - ratio / 2], [0.5 + ratio / 2]], dtype=np.float32) + tcl_poly = np.zeros_like(poly) + point_num = poly.shape[0] + + for idx in range(point_num // 2): + point_pair = poly[idx] + (poly[point_num - 1 - idx] - poly[idx]) * ratio_pair + tcl_poly[idx] = point_pair[0] + tcl_poly[point_num - 1 - idx] = point_pair[1] + return tcl_poly + + def gen_quad_tbo(self, quad, tcl_mask, tbo_map): + """ + Generate tbo_map for give quad. + """ + # upper and lower line function: ax + by + c = 0; + up_line = self.line_cross_two_point(quad[0], quad[1]) + lower_line = self.line_cross_two_point(quad[3], quad[2]) + + quad_h = 0.5 * (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2])) + quad_w = 0.5 * (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3])) + + # average angle of left and right line. + angle = self.average_angle(quad) + + xy_in_poly = np.argwhere(tcl_mask == 1) + for y, x in xy_in_poly: + point = (x, y) + line = self.theta_line_cross_point(angle, point) + cross_point_upper = self.line_cross_point(up_line, line) + cross_point_lower = self.line_cross_point(lower_line, line) + ##FIX, offset reverse + upper_offset_x, upper_offset_y = cross_point_upper - point + lower_offset_x, lower_offset_y = cross_point_lower - point + tbo_map[y, x, 0] = upper_offset_y + tbo_map[y, x, 1] = upper_offset_x + tbo_map[y, x, 2] = lower_offset_y + tbo_map[y, x, 3] = lower_offset_x + tbo_map[y, x, 4] = 1.0 / max(min(quad_h, quad_w), 1.0) * 2 + return tbo_map + + def poly2quads(self, poly): + """ + Split poly into quads. + """ + quad_list = [] + point_num = poly.shape[0] + + # point pair + point_pair_list = [] + for idx in range(point_num // 2): + point_pair = [poly[idx], poly[point_num - 1 - idx]] + point_pair_list.append(point_pair) + + quad_num = point_num // 2 - 1 + for idx in range(quad_num): + # reshape and adjust to clock-wise + quad_list.append((np.array(point_pair_list)[[idx, idx + 1]]).reshape(4, 2)[[0, 2, 3, 1]]) + + return np.array(quad_list) + + def extract_polys(self, poly_txt_path): + """ + Read text_polys, txt_tags, txts from give txt file. 
+ """ + text_polys, txt_tags, txts = [], [], [] + + with open(poly_txt_path) as f: + for line in f.readlines(): + poly_str, txt = line.strip().split('\t') + poly = map(float, poly_str.split(',')) + text_polys.append(np.array(poly, dtype=np.float32).reshape(-1, 2)) + txts.append(txt) + if txt == '###': + txt_tags.append(True) + else: + txt_tags.append(False) + + return np.array(map(np.array, text_polys)), \ + np.array(txt_tags, dtype=np.bool), txts + + def __call__(self, label_infor): + infor = self.convert_label_infor(label_infor) + im_path, text_polys, text_tags, text_strs = infor + im = cv2.imread(im_path) + if im is None: + return None + if text_polys.shape[0] == 0: + return None + + h, w, _ = im.shape + text_polys, text_tags, hv_tags = self.check_and_validate_polys(text_polys, text_tags, (h, w)) + + if text_polys.shape[0] == 0: + return None + + #set aspect ratio and keep area fix + asp_scales = np.arange(1.0, 1.55, 0.1) + asp_scale = np.random.choice(asp_scales) + + if np.random.rand() < 0.5: + asp_scale = 1.0 / asp_scale + asp_scale = math.sqrt(asp_scale) + + asp_wx = asp_scale + asp_hy = 1.0 / asp_scale + im = cv2.resize(im, dsize=None, fx=asp_wx, fy=asp_hy) + text_polys[:, :, 0] *= asp_wx + text_polys[:, :, 1] *= asp_hy + + h, w, _ = im.shape + if max(h, w) > 2048: + rd_scale = 2048.0 / max(h, w) + im = cv2.resize(im, dsize=None, fx=rd_scale, fy=rd_scale) + text_polys *= rd_scale + h, w, _ = im.shape + if min(h, w) < 16: + return None + + #no background + im, text_polys, text_tags, hv_tags, text_strs = self.crop_area(im, \ + text_polys, text_tags, hv_tags, text_strs, crop_background=False) + if text_polys.shape[0] == 0: + return None + #continue for all ignore case + if np.sum((text_tags * 1.0)) >= text_tags.size: + return None + new_h, new_w, _ = im.shape + if (new_h is None) or (new_w is None): + return None + #resize image + std_ratio = float(self.input_size) / max(new_w, new_h) + rand_scales = np.array([0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0, 1.0, 1.0, 1.0, 1.0]) + rz_scale = std_ratio * np.random.choice(rand_scales) + im = cv2.resize(im, dsize=None, fx=rz_scale, fy=rz_scale) + text_polys[:, :, 0] *= rz_scale + text_polys[:, :, 1] *= rz_scale + + #add gaussian blur + if np.random.rand() < 0.1 * 0.5: + ks = np.random.permutation(5)[0] + 1 + ks = int(ks/2)*2 + 1 + im = cv2.GaussianBlur(im, ksize=(ks, ks), sigmaX=0, sigmaY=0) + #add brighter + if np.random.rand() < 0.1 * 0.5: + im = im * (1.0 + np.random.rand() * 0.5) + im = np.clip(im, 0.0, 255.0) + #add darker + if np.random.rand() < 0.1 * 0.5: + im = im * (1.0 - np.random.rand() * 0.5) + im = np.clip(im, 0.0, 255.0) + + # Padding the im to [input_size, input_size] + new_h, new_w, _ = im.shape + if min(new_w, new_h) < self.input_size * 0.5: + return None + + im_padded = np.ones((self.input_size, self.input_size, 3), dtype=np.float32) + im_padded[:, :, 2] = 0.485 * 255 + im_padded[:, :, 1] = 0.456 * 255 + im_padded[:, :, 0] = 0.406 * 255 + + # Random the start position + del_h = self.input_size - new_h + del_w = self.input_size - new_w + sh, sw = 0, 0 + if del_h > 1: + sh = int(np.random.rand() * del_h) + if del_w > 1: + sw = int(np.random.rand() * del_w) + + # Padding + im_padded[sh: sh + new_h, sw: sw + new_w, :] = im.copy() + text_polys[:, :, 0] += sw + text_polys[:, :, 1] += sh + + score_map, border_map, training_mask = self.generate_tcl_label((self.input_size, self.input_size), + text_polys, text_tags, 0.25) + + # SAST head + tvo_map, tco_map = self.generate_tvo_and_tco((self.input_size, self.input_size), text_polys, 
text_tags, tcl_ratio=0.3, ds_ratio=0.25) + # print("test--------tvo_map shape:", tvo_map.shape) + + im_padded[:, :, 2] -= 0.485 * 255 + im_padded[:, :, 1] -= 0.456 * 255 + im_padded[:, :, 0] -= 0.406 * 255 + im_padded[:, :, 2] /= (255.0 * 0.229) + im_padded[:, :, 1] /= (255.0 * 0.224) + im_padded[:, :, 0] /= (255.0 * 0.225) + im_padded = im_padded.transpose((2, 0, 1)) + + return im_padded[::-1, :, :], score_map[np.newaxis, :, :], border_map.transpose((2, 0, 1)), training_mask[np.newaxis, :, :], tvo_map.transpose((2, 0, 1)), tco_map.transpose((2, 0, 1)) + + +class SASTProcessTest(object): + """ + SAST process function for test + """ + def __init__(self, params): + super(SASTProcessTest, self).__init__() + if 'max_side_len' in params: + self.max_side_len = params['max_side_len'] + else: + self.max_side_len = 2400 + + def resize_image(self, im): + """ + resize image to a size multiple of max_stride which is required by the network + :param im: the resized image + :param max_side_len: limit of max image size to avoid out of memory in gpu + :return: the resized image and the resize ratio + """ + h, w, _ = im.shape + + resize_w = w + resize_h = h + + # Fix the longer side + if resize_h > resize_w: + ratio = float(self.max_side_len) / resize_h + else: + ratio = float(self.max_side_len) / resize_w + + resize_h = int(resize_h * ratio) + resize_w = int(resize_w * ratio) + + max_stride = 128 + resize_h = (resize_h + max_stride - 1) // max_stride * max_stride + resize_w = (resize_w + max_stride - 1) // max_stride * max_stride + im = cv2.resize(im, (int(resize_w), int(resize_h))) + ratio_h = resize_h / float(h) + ratio_w = resize_w / float(w) + + return im, (ratio_h, ratio_w) + + def __call__(self, im): + src_h, src_w, _ = im.shape + im, (ratio_h, ratio_w) = self.resize_image(im) + img_mean = [0.485, 0.456, 0.406] + img_std = [0.229, 0.224, 0.225] + im = im[:, :, ::-1].astype(np.float32) + im = im / 255 + im -= img_mean + im /= img_std + im = im.transpose((2, 0, 1)) + im = im[np.newaxis, :] + return [im, (ratio_h, ratio_w, src_h, src_w)] diff --git a/ppocr/data/rec/dataset_traversal.py b/ppocr/data/rec/dataset_traversal.py index 49608bd9a7e1342f54d2734f7b8f5ba33dbc7f86..5efba512c0e22dda1be17b121c0f12f42b74f2ee 100755 --- a/ppocr/data/rec/dataset_traversal.py +++ b/ppocr/data/rec/dataset_traversal.py @@ -26,7 +26,7 @@ from ppocr.utils.utility import initial_logger from ppocr.utils.utility import get_image_file_list logger = initial_logger() -from .img_tools import process_image, get_img_data +from .img_tools import process_image, process_image_srn, get_img_data class LMDBReader(object): @@ -43,6 +43,9 @@ class LMDBReader(object): self.mode = params['mode'] self.drop_last = False self.use_tps = False + self.num_heads = None + if "num_heads" in params: + self.num_heads = params['num_heads'] if "tps" in params: self.ues_tps = True self.use_distort = False @@ -119,12 +122,19 @@ class LMDBReader(object): img = cv2.imread(single_img) if img.shape[-1] == 1 or len(list(img.shape)) == 2: img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR) - norm_img = process_image( - img=img, - image_shape=self.image_shape, - char_ops=self.char_ops, - tps=self.use_tps, - infer_mode=True) + if self.loss_type == 'srn': + norm_img = process_image_srn( + img=img, + image_shape=self.image_shape, + num_heads=self.num_heads, + max_text_length=self.max_text_length) + else: + norm_img = process_image( + img=img, + image_shape=self.image_shape, + char_ops=self.char_ops, + tps=self.use_tps, + infer_mode=True) yield norm_img else: lmdb_sets = 
self.load_hierarchical_lmdb_dataset() @@ -144,14 +154,25 @@ class LMDBReader(object): if sample_info is None: continue img, label = sample_info - outs = process_image( - img=img, - image_shape=self.image_shape, - label=label, - char_ops=self.char_ops, - loss_type=self.loss_type, - max_text_length=self.max_text_length, - distort=self.use_distort) + outs = [] + if self.loss_type == "srn": + outs = process_image_srn( + img=img, + image_shape=self.image_shape, + num_heads=self.num_heads, + max_text_length=self.max_text_length, + label=label, + char_ops=self.char_ops, + loss_type=self.loss_type) + + else: + outs = process_image( + img=img, + image_shape=self.image_shape, + label=label, + char_ops=self.char_ops, + loss_type=self.loss_type, + max_text_length=self.max_text_length) if outs is None: continue yield outs diff --git a/ppocr/data/rec/img_tools.py b/ppocr/data/rec/img_tools.py index 0835603b5896a4f7ab946c4a694dcbec3f853a54..527e0266ee33ac81e29b5610ed05f401860078a4 100755 --- a/ppocr/data/rec/img_tools.py +++ b/ppocr/data/rec/img_tools.py @@ -381,3 +381,84 @@ def process_image(img, assert False, "Unsupport loss_type %s in process_image"\ % loss_type return (norm_img) + +def resize_norm_img_srn(img, image_shape): + imgC, imgH, imgW = image_shape + + img_black = np.zeros((imgH, imgW)) + im_hei = img.shape[0] + im_wid = img.shape[1] + + if im_wid <= im_hei * 1: + img_new = cv2.resize(img, (imgH * 1, imgH)) + elif im_wid <= im_hei * 2: + img_new = cv2.resize(img, (imgH * 2, imgH)) + elif im_wid <= im_hei * 3: + img_new = cv2.resize(img, (imgH * 3, imgH)) + else: + img_new = cv2.resize(img, (imgW, imgH)) + + img_np = np.asarray(img_new) + img_np = cv2.cvtColor(img_np, cv2.COLOR_BGR2GRAY) + img_black[:, 0:img_np.shape[1]] = img_np + img_black = img_black[:, :, np.newaxis] + + row, col, c = img_black.shape + c = 1 + + return np.reshape(img_black, (c, row, col)).astype(np.float32) + +def srn_other_inputs(image_shape, + num_heads, + max_text_length): + + imgC, imgH, imgW = image_shape + feature_dim = int((imgH / 8) * (imgW / 8)) + + encoder_word_pos = np.array(range(0, feature_dim)).reshape((feature_dim, 1)).astype('int64') + gsrm_word_pos = np.array(range(0, max_text_length)).reshape((max_text_length, 1)).astype('int64') + + lbl_weight = np.array([37] * max_text_length).reshape((-1,1)).astype('int64') + + gsrm_attn_bias_data = np.ones((1, max_text_length, max_text_length)) + gsrm_slf_attn_bias1 = np.triu(gsrm_attn_bias_data, 1).reshape([-1, 1, max_text_length, max_text_length]) + gsrm_slf_attn_bias1 = np.tile(gsrm_slf_attn_bias1, [1, num_heads, 1, 1]) * [-1e9] + + gsrm_slf_attn_bias2 = np.tril(gsrm_attn_bias_data, -1).reshape([-1, 1, max_text_length, max_text_length]) + gsrm_slf_attn_bias2 = np.tile(gsrm_slf_attn_bias2, [1, num_heads, 1, 1]) * [-1e9] + + encoder_word_pos = encoder_word_pos[np.newaxis, :] + gsrm_word_pos = gsrm_word_pos[np.newaxis, :] + + return [lbl_weight, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2] + +def process_image_srn(img, + image_shape, + num_heads, + max_text_length, + label=None, + char_ops=None, + loss_type=None): + norm_img = resize_norm_img_srn(img, image_shape) + norm_img = norm_img[np.newaxis, :] + [lbl_weight, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2] = \ + srn_other_inputs(image_shape, num_heads, max_text_length) + + if label is not None: + char_num = char_ops.get_char_num() + text = char_ops.encode(label) + if len(text) == 0 or len(text) > max_text_length: + return None + else: + if loss_type == 
"srn": + text_padded = [37] * max_text_length + for i in range(len(text)): + text_padded[i] = text[i] + lbl_weight[i] = [1.0] + text_padded = np.array(text_padded) + text = text_padded.reshape(-1, 1) + return (norm_img, text,encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2,lbl_weight) + else: + assert False, "Unsupport loss_type %s in process_image"\ + % loss_type + return (norm_img, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2) diff --git a/ppocr/modeling/architectures/det_model.py b/ppocr/modeling/architectures/det_model.py old mode 100755 new mode 100644 index 7ea0949bc80c55724182704616a93b879be92314..54d3a479f40a3f9f6ebb9e6ab739ae7a44796a2e --- a/ppocr/modeling/architectures/det_model.py +++ b/ppocr/modeling/architectures/det_model.py @@ -97,6 +97,23 @@ class DetModel(object): 'shrink_mask':shrink_mask,\ 'threshold_map':threshold_map,\ 'threshold_mask':threshold_mask} + elif self.algorithm == "SAST": + input_score = fluid.layers.data( + name='score', shape=[1, 128, 128], dtype='float32') + input_border = fluid.layers.data( + name='border', shape=[5, 128, 128], dtype='float32') + input_mask = fluid.layers.data( + name='mask', shape=[1, 128, 128], dtype='float32') + input_tvo = fluid.layers.data( + name='tvo', shape=[9, 128, 128], dtype='float32') + input_tco = fluid.layers.data( + name='tco', shape=[3, 128, 128], dtype='float32') + feed_list = [image, input_score, input_border, input_mask, input_tvo, input_tco] + labels = {'input_score': input_score,\ + 'input_border': input_border,\ + 'input_mask': input_mask,\ + 'input_tvo': input_tvo,\ + 'input_tco': input_tco} loader = fluid.io.DataLoader.from_generator( feed_list=feed_list, capacity=64, diff --git a/ppocr/modeling/architectures/rec_model.py b/ppocr/modeling/architectures/rec_model.py index e80a50ab6504c80aa7f10759576208486caf7c3f..fe2d4c16dce3882980fe2238ecc16c7c08a89792 100755 --- a/ppocr/modeling/architectures/rec_model.py +++ b/ppocr/modeling/architectures/rec_model.py @@ -58,6 +58,10 @@ class RecModel(object): self.loss_type = global_params['loss_type'] self.image_shape = global_params['image_shape'] self.max_text_length = global_params['max_text_length'] + if "num_heads" in global_params: + self.num_heads = global_params["num_heads"] + else: + self.num_heads = None def create_feed(self, mode): image_shape = deepcopy(self.image_shape) @@ -77,6 +81,48 @@ class RecModel(object): lod_level=1) feed_list = [image, label_in, label_out] labels = {'label_in': label_in, 'label_out': label_out} + elif self.loss_type == "srn": + encoder_word_pos = fluid.data( + name="encoder_word_pos", + shape=[ + -1, int((image_shape[-2] / 8) * (image_shape[-1] / 8)), + 1 + ], + dtype="int64") + gsrm_word_pos = fluid.data( + name="gsrm_word_pos", + shape=[-1, self.max_text_length, 1], + dtype="int64") + gsrm_slf_attn_bias1 = fluid.data( + name="gsrm_slf_attn_bias1", + shape=[ + -1, self.num_heads, self.max_text_length, + self.max_text_length + ], + dtype="float32") + gsrm_slf_attn_bias2 = fluid.data( + name="gsrm_slf_attn_bias2", + shape=[ + -1, self.num_heads, self.max_text_length, + self.max_text_length + ], + dtype="float32") + lbl_weight = fluid.layers.data( + name="lbl_weight", shape=[-1, 1], dtype='int64') + label = fluid.data( + name='label', shape=[-1, 1], dtype='int32', lod_level=1) + feed_list = [ + image, label, encoder_word_pos, gsrm_word_pos, + gsrm_slf_attn_bias1, gsrm_slf_attn_bias2, lbl_weight + ] + labels = { + 'label': label, + 'encoder_word_pos': encoder_word_pos, + 'gsrm_word_pos': 
gsrm_word_pos, + 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1, + 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2, + 'lbl_weight': lbl_weight + } else: label = fluid.data( name='label', shape=[None, 1], dtype='int32', lod_level=1) @@ -88,6 +134,8 @@ class RecModel(object): use_double_buffer=True, iterable=False) else: + labels = None + loader = None if self.char_type == "ch" and self.infer_img: image_shape[-1] = -1 if self.tps != None: @@ -98,8 +146,42 @@ class RecModel(object): ) image_shape = deepcopy(self.image_shape) image = fluid.data(name='image', shape=image_shape, dtype='float32') - labels = None - loader = None + if self.loss_type == "srn": + encoder_word_pos = fluid.data( + name="encoder_word_pos", + shape=[ + -1, int((image_shape[-2] / 8) * (image_shape[-1] / 8)), + 1 + ], + dtype="int64") + gsrm_word_pos = fluid.data( + name="gsrm_word_pos", + shape=[-1, self.max_text_length, 1], + dtype="int64") + gsrm_slf_attn_bias1 = fluid.data( + name="gsrm_slf_attn_bias1", + shape=[ + -1, self.num_heads, self.max_text_length, + self.max_text_length + ], + dtype="float32") + gsrm_slf_attn_bias2 = fluid.data( + name="gsrm_slf_attn_bias2", + shape=[ + -1, self.num_heads, self.max_text_length, + self.max_text_length + ], + dtype="float32") + feed_list = [ + image, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, + gsrm_slf_attn_bias2 + ] + labels = { + 'encoder_word_pos': encoder_word_pos, + 'gsrm_word_pos': gsrm_word_pos, + 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1, + 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2 + } return image, labels, loader def __call__(self, mode): @@ -117,13 +199,27 @@ class RecModel(object): label = labels['label_out'] else: label = labels['label'] - outputs = {'total_loss':loss, 'decoded_out':\ - decoded_out, 'label':label} + if self.loss_type == 'srn': + total_loss, img_loss, word_loss = self.loss(predicts, labels) + outputs = { + 'total_loss': total_loss, + 'img_loss': img_loss, + 'word_loss': word_loss, + 'decoded_out': decoded_out, + 'label': label + } + else: + outputs = {'total_loss':loss, 'decoded_out':\ + decoded_out, 'label':label} return loader, outputs + elif mode == "export": predict = predicts['predict'] if self.loss_type == "ctc": predict = fluid.layers.softmax(predict) + if self.loss_type == "srn": + raise Exception( + "Warning! SRN does not support export model currently") return [image, {'decoded_out': decoded_out, 'predicts': predict}] else: predict = predicts['predict'] diff --git a/ppocr/modeling/backbones/det_resnet_vd_sast.py b/ppocr/modeling/backbones/det_resnet_vd_sast.py new file mode 100644 index 0000000000000000000000000000000000000000..14fe713852d009bef8dd7f3739c2e8070789e359 --- /dev/null +++ b/ppocr/modeling/backbones/det_resnet_vd_sast.py @@ -0,0 +1,274 @@ +#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. 
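+
+# ResNet-vd backbone variant used by the SAST detector. Compared with the
+# plain detection ResNet it adds an extra residual stage and returns a dict
+# of named feature maps ('block_0', 'block_1', ..., 'block_6' for the
+# 50-layer setting, spanning full resolution down to 1/64) so the SAST head
+# can fuse multiple scales. Rough usage sketch (the tensor name is
+# illustrative; `params` normally comes from the detection YAML config):
+#
+#     backbone = ResNet({'layers': 50})
+#     blocks = backbone(image)   # {'block_0': image, ..., 'block_6': 1/64 map}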
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import paddle.fluid as fluid +from paddle.fluid.param_attr import ParamAttr + +__all__ = ["ResNet"] + + +class ResNet(object): + def __init__(self, params): + """ + the Resnet backbone network for detection module. + Args: + params(dict): the super parameters for network build + """ + self.layers = params['layers'] + supported_layers = [18, 34, 50, 101, 152] + assert self.layers in supported_layers, \ + "supported layers are {} but input layer is {}".format(supported_layers, self.layers) + self.is_3x3 = True + + def __call__(self, input): + layers = self.layers + is_3x3 = self.is_3x3 + # if layers == 18: + # depth = [2, 2, 2, 2] + # elif layers == 34 or layers == 50: + # depth = [3, 4, 6, 3] + # elif layers == 101: + # depth = [3, 4, 23, 3] + # elif layers == 152: + # depth = [3, 8, 36, 3] + # elif layers == 200: + # depth = [3, 12, 48, 3] + # num_filters = [64, 128, 256, 512] + # outs = [] + + if layers == 18: + depth = [2, 2, 2, 2]#, 3, 3] + elif layers == 34 or layers == 50: + #depth = [3, 4, 6, 3]#, 3, 3] + depth = [3, 4, 6, 3, 3]#, 3] + elif layers == 101: + depth = [3, 4, 23, 3]#, 3, 3] + elif layers == 152: + depth = [3, 8, 36, 3]#, 3, 3] + num_filters = [64, 128, 256, 512, 512]#, 512] + blocks = {} + + idx = 'block_0' + blocks[idx] = input + + if is_3x3 == False: + conv = self.conv_bn_layer( + input=input, + num_filters=64, + filter_size=7, + stride=2, + act='relu') + else: + conv = self.conv_bn_layer( + input=input, + num_filters=32, + filter_size=3, + stride=2, + act='relu', + name='conv1_1') + conv = self.conv_bn_layer( + input=conv, + num_filters=32, + filter_size=3, + stride=1, + act='relu', + name='conv1_2') + conv = self.conv_bn_layer( + input=conv, + num_filters=64, + filter_size=3, + stride=1, + act='relu', + name='conv1_3') + idx = 'block_1' + blocks[idx] = conv + + conv = fluid.layers.pool2d( + input=conv, + pool_size=3, + pool_stride=2, + pool_padding=1, + pool_type='max') + + if layers >= 50: + for block in range(len(depth)): + for i in range(depth[block]): + if layers in [101, 152, 200] and block == 2: + if i == 0: + conv_name = "res" + str(block + 2) + "a" + else: + conv_name = "res" + str(block + 2) + "b" + str(i) + else: + conv_name = "res" + str(block + 2) + chr(97 + i) + conv = self.bottleneck_block( + input=conv, + num_filters=num_filters[block], + stride=2 if i == 0 and block != 0 else 1, + if_first=block == i == 0, + name=conv_name) + # outs.append(conv) + idx = 'block_' + str(block + 2) + blocks[idx] = conv + else: + for block in range(len(depth)): + for i in range(depth[block]): + conv_name = "res" + str(block + 2) + chr(97 + i) + conv = self.basic_block( + input=conv, + num_filters=num_filters[block], + stride=2 if i == 0 and block != 0 else 1, + if_first=block == i == 0, + name=conv_name) + # outs.append(conv) + idx = 'block_' + str(block + 2) + blocks[idx] = conv + # return outs + return blocks + + def conv_bn_layer(self, + input, + num_filters, + filter_size, + stride=1, + groups=1, + act=None, + name=None): + conv = fluid.layers.conv2d( + input=input, + num_filters=num_filters, + filter_size=filter_size, + stride=stride, + padding=(filter_size - 1) // 2, + groups=groups, + act=None, + param_attr=ParamAttr(name=name + "_weights"), + bias_attr=False) + if name == "conv1": + bn_name = "bn_" + name + else: + bn_name = "bn" + name[3:] + return fluid.layers.batch_norm( + input=conv, + act=act, + param_attr=ParamAttr(name=bn_name + '_scale'), + 
bias_attr=ParamAttr(bn_name + '_offset'), + moving_mean_name=bn_name + '_mean', + moving_variance_name=bn_name + '_variance') + + def conv_bn_layer_new(self, + input, + num_filters, + filter_size, + stride=1, + groups=1, + act=None, + name=None): + pool = fluid.layers.pool2d( + input=input, + pool_size=2, + pool_stride=2, + pool_padding=0, + pool_type='avg', + ceil_mode=True) + + conv = fluid.layers.conv2d( + input=pool, + num_filters=num_filters, + filter_size=filter_size, + stride=1, + padding=(filter_size - 1) // 2, + groups=groups, + act=None, + param_attr=ParamAttr(name=name + "_weights"), + bias_attr=False) + if name == "conv1": + bn_name = "bn_" + name + else: + bn_name = "bn" + name[3:] + return fluid.layers.batch_norm( + input=conv, + act=act, + param_attr=ParamAttr(name=bn_name + '_scale'), + bias_attr=ParamAttr(bn_name + '_offset'), + moving_mean_name=bn_name + '_mean', + moving_variance_name=bn_name + '_variance') + + def shortcut(self, input, ch_out, stride, name, if_first=False): + ch_in = input.shape[1] + if ch_in != ch_out or stride != 1: + if if_first: + return self.conv_bn_layer(input, ch_out, 1, stride, name=name) + else: + return self.conv_bn_layer_new( + input, ch_out, 1, stride, name=name) + elif if_first: + return self.conv_bn_layer(input, ch_out, 1, stride, name=name) + else: + return input + + def bottleneck_block(self, input, num_filters, stride, name, if_first): + conv0 = self.conv_bn_layer( + input=input, + num_filters=num_filters, + filter_size=1, + act='relu', + name=name + "_branch2a") + conv1 = self.conv_bn_layer( + input=conv0, + num_filters=num_filters, + filter_size=3, + stride=stride, + act='relu', + name=name + "_branch2b") + conv2 = self.conv_bn_layer( + input=conv1, + num_filters=num_filters * 4, + filter_size=1, + act=None, + name=name + "_branch2c") + + short = self.shortcut( + input, + num_filters * 4, + stride, + if_first=if_first, + name=name + "_branch1") + + return fluid.layers.elementwise_add(x=short, y=conv2, act='relu') + + def basic_block(self, input, num_filters, stride, name, if_first): + conv0 = self.conv_bn_layer( + input=input, + num_filters=num_filters, + filter_size=3, + act='relu', + stride=stride, + name=name + "_branch2a") + conv1 = self.conv_bn_layer( + input=conv0, + num_filters=num_filters, + filter_size=3, + act=None, + name=name + "_branch2b") + short = self.shortcut( + input, + num_filters, + stride, + if_first=if_first, + name=name + "_branch1") + return fluid.layers.elementwise_add(x=short, y=conv1, act='relu') diff --git a/ppocr/modeling/backbones/rec_resnet50_fpn.py b/ppocr/modeling/backbones/rec_resnet50_fpn.py new file mode 100755 index 0000000000000000000000000000000000000000..f6d72377fe4e2d3355a4510f070178ad48dd2a27 --- /dev/null +++ b/ppocr/modeling/backbones/rec_resnet50_fpn.py @@ -0,0 +1,172 @@ +#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. 
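+
+# ResNet50 backbone with a lightweight top-down feature fusion, used as the
+# SRN feature extractor. The outputs of the last three stages are merged
+# (transposed-conv upsampling where spatial sizes differ, then concat plus
+# 1x1/3x3 convs) into one 512-channel map. Usage sketch, assuming it is
+# constructed the same way as the other rec backbones in this repo:
+#
+#     backbone = ResNet({'layers': 50})
+#     feats = backbone(image)   # single fused 512-channel feature map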
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import math + +import paddle +import paddle.fluid as fluid +from paddle.fluid.param_attr import ParamAttr + + +__all__ = ["ResNet", "ResNet18", "ResNet34", "ResNet50", "ResNet101", "ResNet152"] + +Trainable = True +w_nolr = fluid.ParamAttr( + trainable = Trainable) +train_parameters = { + "input_size": [3, 224, 224], + "input_mean": [0.485, 0.456, 0.406], + "input_std": [0.229, 0.224, 0.225], + "learning_strategy": { + "name": "piecewise_decay", + "batch_size": 256, + "epochs": [30, 60, 90], + "steps": [0.1, 0.01, 0.001, 0.0001] + } +} + +class ResNet(): + def __init__(self, params): + self.layers = params['layers'] + self.params = train_parameters + + + def __call__(self, input): + layers = self.layers + supported_layers = [18, 34, 50, 101, 152] + assert layers in supported_layers, \ + "supported layers are {} but input layer is {}".format(supported_layers, layers) + + if layers == 18: + depth = [2, 2, 2, 2] + elif layers == 34 or layers == 50: + depth = [3, 4, 6, 3] + elif layers == 101: + depth = [3, 4, 23, 3] + elif layers == 152: + depth = [3, 8, 36, 3] + stride_list = [(2,2),(2,2),(1,1),(1,1)] + num_filters = [64, 128, 256, 512] + + conv = self.conv_bn_layer( + input=input, num_filters=64, filter_size=7, stride=2, act='relu', name="conv1") + F = [] + if layers >= 50: + for block in range(len(depth)): + for i in range(depth[block]): + if layers in [101, 152] and block == 2: + if i == 0: + conv_name = "res" + str(block + 2) + "a" + else: + conv_name = "res" + str(block + 2) + "b" + str(i) + else: + conv_name = "res" + str(block + 2) + chr(97 + i) + conv = self.bottleneck_block( + input=conv, + num_filters=num_filters[block], + stride=stride_list[block] if i == 0 else 1, name=conv_name) + F.append(conv) + + base = F[-1] + for i in [-2, -3]: + b, c, w, h = F[i].shape + if (w,h) == base.shape[2:]: + base = base + else: + base = fluid.layers.conv2d_transpose( input=base, num_filters=c,filter_size=4, stride=2, + padding=1,act=None, + param_attr=w_nolr, + bias_attr=w_nolr) + base = fluid.layers.batch_norm(base, act = "relu", param_attr=w_nolr, bias_attr=w_nolr) + base = fluid.layers.concat([base, F[i]], axis=1) + base = fluid.layers.conv2d(base, num_filters=c, filter_size=1, param_attr=w_nolr, bias_attr=w_nolr) + base = fluid.layers.conv2d(base, num_filters=c, filter_size=3,padding = 1, param_attr=w_nolr, bias_attr=w_nolr) + base = fluid.layers.batch_norm(base, act = "relu", param_attr=w_nolr, bias_attr=w_nolr) + + base = fluid.layers.conv2d(base, num_filters=512, filter_size=1,bias_attr=w_nolr,param_attr=w_nolr) + + return base + + def conv_bn_layer(self, + input, + num_filters, + filter_size, + stride=1, + groups=1, + act=None, + name=None): + conv = fluid.layers.conv2d( + input=input, + num_filters=num_filters, + filter_size= 2 if stride==(1,1) else filter_size, + dilation = 2 if stride==(1,1) else 1, + stride=stride, + padding=(filter_size - 1) // 2, + groups=groups, + act=None, + param_attr=ParamAttr(name=name + "_weights",trainable = Trainable), + bias_attr=False, + name=name + '.conv2d.output.1') + + if name == "conv1": + bn_name = "bn_" + name + else: + bn_name = "bn" + name[3:] + return fluid.layers.batch_norm(input=conv, + act=act, + name=bn_name + '.output.1', + param_attr=ParamAttr(name=bn_name + '_scale',trainable = Trainable), + bias_attr=ParamAttr(bn_name + '_offset',trainable = Trainable), + moving_mean_name=bn_name + '_mean', + moving_variance_name=bn_name + 
'_variance', ) + + def shortcut(self, input, ch_out, stride, is_first, name): + ch_in = input.shape[1] + if ch_in != ch_out or stride != 1 or is_first == True: + if stride == (1,1): + return self.conv_bn_layer(input, ch_out, 1, 1, name=name) + else: #stride == (2,2) + return self.conv_bn_layer(input, ch_out, 1, stride, name=name) + + else: + return input + + def bottleneck_block(self, input, num_filters, stride, name): + conv0 = self.conv_bn_layer( + input=input, num_filters=num_filters, filter_size=1, act='relu', name=name + "_branch2a") + conv1 = self.conv_bn_layer( + input=conv0, + num_filters=num_filters, + filter_size=3, + stride=stride, + act='relu', + name=name + "_branch2b") + conv2 = self.conv_bn_layer( + input=conv1, num_filters=num_filters * 4, filter_size=1, act=None, name=name + "_branch2c") + + short = self.shortcut(input, num_filters * 4, stride, is_first=False, name=name + "_branch1") + + return fluid.layers.elementwise_add(x=short, y=conv2, act='relu', name=name + ".add.output.5") + + def basic_block(self, input, num_filters, stride, is_first, name): + conv0 = self.conv_bn_layer(input=input, num_filters=num_filters, filter_size=3, act='relu', stride=stride, + name=name + "_branch2a") + conv1 = self.conv_bn_layer(input=conv0, num_filters=num_filters, filter_size=3, act=None, + name=name + "_branch2b") + short = self.shortcut(input, num_filters, stride, is_first, name=name + "_branch1") + return fluid.layers.elementwise_add(x=short, y=conv1, act='relu') diff --git a/ppocr/modeling/heads/det_sast_head.py b/ppocr/modeling/heads/det_sast_head.py new file mode 100644 index 0000000000000000000000000000000000000000..b5e19b844abda003d65a1f95026685cfe0cfffd6 --- /dev/null +++ b/ppocr/modeling/heads/det_sast_head.py @@ -0,0 +1,228 @@ +#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import paddle.fluid as fluid +from ..common_functions import conv_bn_layer, deconv_bn_layer +from collections import OrderedDict + + +class SASTHead(object): + """ + SAST: + see arxiv: https://arxiv.org/abs/1908.05498 + args: + params(dict): the super parameters for network build + """ + + def __init__(self, params): + self.model_name = params['model_name'] + self.with_cab = params['with_cab'] + + def FPN_Up_Fusion(self, blocks): + """ + blocks{}: contain block_2, block_3, block_4, block_5, block_6, block_7 with + 1/4, 1/8, 1/16, 1/32, 1/64, 1/128 resolution. 
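+        Fuses block_6 up to block_2 top-down: each level is projected by a
+        1x1 conv, added to the deconv-upsampled coarser result, then refined,
+        ending in a 128-channel map at 1/4 resolution. (Only block_2 through
+        block_6 are consumed here; the block_7 from the note above is never
+        used.)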
+ """ + f = [blocks['block_6'], blocks['block_5'], blocks['block_4'], blocks['block_3'], blocks['block_2']] + num_outputs = [256, 256, 192, 192, 128] + g = [None, None, None, None, None] + h = [None, None, None, None, None] + for i in range(5): + h[i] = conv_bn_layer(input=f[i], num_filters=num_outputs[i], + filter_size=1, stride=1, act=None, name='fpn_up_h'+str(i)) + + for i in range(4): + if i == 0: + g[i] = deconv_bn_layer(input=h[i], num_filters=num_outputs[i + 1], act=None, name='fpn_up_g0') + print("g[{}] shape: {}".format(i, g[i].shape)) + else: + g[i] = fluid.layers.elementwise_add(x=g[i - 1], y=h[i]) + g[i] = fluid.layers.relu(g[i]) + #g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i], + # filter_size=1, stride=1, act='relu') + g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i], + filter_size=3, stride=1, act='relu', name='fpn_up_g%d_1'%i) + g[i] = deconv_bn_layer(input=g[i], num_filters=num_outputs[i + 1], act=None, name='fpn_up_g%d_2'%i) + print("g[{}] shape: {}".format(i, g[i].shape)) + + g[4] = fluid.layers.elementwise_add(x=g[3], y=h[4]) + g[4] = fluid.layers.relu(g[4]) + g[4] = conv_bn_layer(input=g[4], num_filters=num_outputs[4], + filter_size=3, stride=1, act='relu', name='fpn_up_fusion_1') + g[4] = conv_bn_layer(input=g[4], num_filters=num_outputs[4], + filter_size=1, stride=1, act=None, name='fpn_up_fusion_2') + + return g[4] + + def FPN_Down_Fusion(self, blocks): + """ + blocks{}: contain block_2, block_3, block_4, block_5, block_6, block_7 with + 1/4, 1/8, 1/16, 1/32, 1/64, 1/128 resolution. + """ + f = [blocks['block_0'], blocks['block_1'], blocks['block_2']] + num_outputs = [32, 64, 128] + g = [None, None, None] + h = [None, None, None] + for i in range(3): + h[i] = conv_bn_layer(input=f[i], num_filters=num_outputs[i], + filter_size=3, stride=1, act=None, name='fpn_down_h'+str(i)) + for i in range(2): + if i == 0: + g[i] = conv_bn_layer(input=h[i], num_filters=num_outputs[i+1], filter_size=3, stride=2, act=None, name='fpn_down_g0') + else: + g[i] = fluid.layers.elementwise_add(x=g[i - 1], y=h[i]) + g[i] = fluid.layers.relu(g[i]) + g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i], filter_size=3, stride=1, act='relu', name='fpn_down_g%d_1'%i) + g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i+1], filter_size=3, stride=2, act=None, name='fpn_down_g%d_2'%i) + # print("g[{}] shape: {}".format(i, g[i].shape)) + g[2] = fluid.layers.elementwise_add(x=g[1], y=h[2]) + g[2] = fluid.layers.relu(g[2]) + g[2] = conv_bn_layer(input=g[2], num_filters=num_outputs[2], + filter_size=3, stride=1, act='relu', name='fpn_down_fusion_1') + g[2] = conv_bn_layer(input=g[2], num_filters=num_outputs[2], + filter_size=1, stride=1, act=None, name='fpn_down_fusion_2') + return g[2] + + def SAST_Header1(self, f_common): + """Detector header.""" + #f_score + f_score = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_score1') + f_score = conv_bn_layer(input=f_score, num_filters=64, filter_size=3, stride=1, act='relu', name='f_score2') + f_score = conv_bn_layer(input=f_score, num_filters=128, filter_size=1, stride=1, act='relu', name='f_score3') + f_score = conv_bn_layer(input=f_score, num_filters=1, filter_size=3, stride=1, name='f_score4') + f_score = fluid.layers.sigmoid(f_score) + # print("f_score shape: {}".format(f_score.shape)) + + #f_boder + f_border = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_border1') + f_border = conv_bn_layer(input=f_border, num_filters=64, 
filter_size=3, stride=1, act='relu', name='f_border2') + f_border = conv_bn_layer(input=f_border, num_filters=128, filter_size=1, stride=1, act='relu', name='f_border3') + f_border = conv_bn_layer(input=f_border, num_filters=4, filter_size=3, stride=1, name='f_border4') + # print("f_border shape: {}".format(f_border.shape)) + + return f_score, f_border + + def SAST_Header2(self, f_common): + """Detector header.""" + #f_tvo + f_tvo = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_tvo1') + f_tvo = conv_bn_layer(input=f_tvo, num_filters=64, filter_size=3, stride=1, act='relu', name='f_tvo2') + f_tvo = conv_bn_layer(input=f_tvo, num_filters=128, filter_size=1, stride=1, act='relu', name='f_tvo3') + f_tvo = conv_bn_layer(input=f_tvo, num_filters=8, filter_size=3, stride=1, name='f_tvo4') + # print("f_tvo shape: {}".format(f_tvo.shape)) + + #f_tco + f_tco = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_tco1') + f_tco = conv_bn_layer(input=f_tco, num_filters=64, filter_size=3, stride=1, act='relu', name='f_tco2') + f_tco = conv_bn_layer(input=f_tco, num_filters=128, filter_size=1, stride=1, act='relu', name='f_tco3') + f_tco = conv_bn_layer(input=f_tco, num_filters=2, filter_size=3, stride=1, name='f_tco4') + # print("f_tco shape: {}".format(f_tco.shape)) + + return f_tvo, f_tco + + def cross_attention(self, f_common): + """ + """ + f_shape = fluid.layers.shape(f_common) + f_theta = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_theta') + f_phi = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_phi') + f_g = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_g') + ### horizon + fh_theta = f_theta + fh_phi = f_phi + fh_g = f_g + #flatten + fh_theta = fluid.layers.transpose(fh_theta, [0, 2, 3, 1]) + fh_theta = fluid.layers.reshape(fh_theta, [f_shape[0] * f_shape[2], f_shape[3], 128]) + fh_phi = fluid.layers.transpose(fh_phi, [0, 2, 3, 1]) + fh_phi = fluid.layers.reshape(fh_phi, [f_shape[0] * f_shape[2], f_shape[3], 128]) + fh_g = fluid.layers.transpose(fh_g, [0, 2, 3, 1]) + fh_g = fluid.layers.reshape(fh_g, [f_shape[0] * f_shape[2], f_shape[3], 128]) + #correlation + fh_attn = fluid.layers.matmul(fh_theta, fluid.layers.transpose(fh_phi, [0, 2, 1])) + #scale + fh_attn = fh_attn / (128 ** 0.5) + fh_attn = fluid.layers.softmax(fh_attn) + #weighted sum + fh_weight = fluid.layers.matmul(fh_attn, fh_g) + fh_weight = fluid.layers.reshape(fh_weight, [f_shape[0], f_shape[2], f_shape[3], 128]) + # print("fh_weight: {}".format(fh_weight.shape)) + fh_weight = fluid.layers.transpose(fh_weight, [0, 3, 1, 2]) + fh_weight = conv_bn_layer(input=fh_weight, num_filters=128, filter_size=1, stride=1, name='fh_weight') + #short cut + fh_sc = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, name='fh_sc') + f_h = fluid.layers.relu(fh_weight + fh_sc) + ###### + #vertical + fv_theta = fluid.layers.transpose(f_theta, [0, 1, 3, 2]) + fv_phi = fluid.layers.transpose(f_phi, [0, 1, 3, 2]) + fv_g = fluid.layers.transpose(f_g, [0, 1, 3, 2]) + #flatten + fv_theta = fluid.layers.transpose(fv_theta, [0, 2, 3, 1]) + fv_theta = fluid.layers.reshape(fv_theta, [f_shape[0] * f_shape[3], f_shape[2], 128]) + fv_phi = fluid.layers.transpose(fv_phi, [0, 2, 3, 1]) + fv_phi = fluid.layers.reshape(fv_phi, [f_shape[0] * f_shape[3], f_shape[2], 128]) + fv_g = fluid.layers.transpose(fv_g, [0, 2, 3, 1]) + fv_g = 
fluid.layers.reshape(fv_g, [f_shape[0] * f_shape[3], f_shape[2], 128]) + #correlation + fv_attn = fluid.layers.matmul(fv_theta, fluid.layers.transpose(fv_phi, [0, 2, 1])) + #scale + fv_attn = fv_attn / (128 ** 0.5) + fv_attn = fluid.layers.softmax(fv_attn) + #weighted sum + fv_weight = fluid.layers.matmul(fv_attn, fv_g) + fv_weight = fluid.layers.reshape(fv_weight, [f_shape[0], f_shape[3], f_shape[2], 128]) + # print("fv_weight: {}".format(fv_weight.shape)) + fv_weight = fluid.layers.transpose(fv_weight, [0, 3, 2, 1]) + fv_weight = conv_bn_layer(input=fv_weight, num_filters=128, filter_size=1, stride=1, name='fv_weight') + #short cut + fv_sc = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, name='fv_sc') + f_v = fluid.layers.relu(fv_weight + fv_sc) + ###### + f_attn = fluid.layers.concat([f_h, f_v], axis=1) + f_attn = conv_bn_layer(input=f_attn, num_filters=128, filter_size=1, stride=1, act='relu', name='f_attn') + return f_attn + + def __call__(self, blocks, with_cab=False): + # for k, v in blocks.items(): + # print(k, v.shape) + + #down fpn + f_down = self.FPN_Down_Fusion(blocks) + # print("f_down shape: {}".format(f_down.shape)) + #up fpn + f_up = self.FPN_Up_Fusion(blocks) + # print("f_up shape: {}".format(f_up.shape)) + #fusion + f_common = fluid.layers.elementwise_add(x=f_down, y=f_up) + f_common = fluid.layers.relu(f_common) + # print("f_common: {}".format(f_common.shape)) + + if self.with_cab: + # print('enhence f_common with CAB.') + f_common = self.cross_attention(f_common) + + f_score, f_border= self.SAST_Header1(f_common) + f_tvo, f_tco = self.SAST_Header2(f_common) + + predicts = OrderedDict() + predicts['f_score'] = f_score + predicts['f_border'] = f_border + predicts['f_tvo'] = f_tvo + predicts['f_tco'] = f_tco + return predicts \ No newline at end of file diff --git a/ppocr/modeling/heads/rec_srn_all_head.py b/ppocr/modeling/heads/rec_srn_all_head.py new file mode 100755 index 0000000000000000000000000000000000000000..e1bb955d437faca243e3768e24b47f4328218624 --- /dev/null +++ b/ppocr/modeling/heads/rec_srn_all_head.py @@ -0,0 +1,230 @@ +#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. 
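+
+# SRN prediction head (see https://arxiv.org/abs/2003.12294): PVAM reads
+# order-aware visual features via parallel attention, GSRM reasons over the
+# predicted character sequence with bi-directional transformer encoders, and
+# VSFD gates the two feature streams into the final classification. Usage
+# sketch (hypothetical param values; these normally come from the rec YAML):
+#
+#     head = SRNPredict({'char_num': 38, 'max_text_length': 25,
+#                        'num_heads': 8, 'num_encoder_TUs': 2,
+#                        'num_decoder_TUs': 4, 'hidden_dims': 512})
+#     predicts = head(conv_feats, others)  # others: word-pos & attn-bias inputs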
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import math + +import paddle +import paddle.fluid as fluid +from paddle.fluid.param_attr import ParamAttr +import numpy as np +from .self_attention.model import wrap_encoder +from .self_attention.model import wrap_encoder_forFeature +gradient_clip = 10 + + +class SRNPredict(object): + def __init__(self, params): + super(SRNPredict, self).__init__() + self.char_num = params['char_num'] + self.max_length = params['max_text_length'] + + self.num_heads = params['num_heads'] + self.num_encoder_TUs = params['num_encoder_TUs'] + self.num_decoder_TUs = params['num_decoder_TUs'] + self.hidden_dims = params['hidden_dims'] + + def pvam(self, inputs, others): + + b, c, h, w = inputs.shape + conv_features = fluid.layers.reshape(x=inputs, shape=[-1, c, h * w]) + conv_features = fluid.layers.transpose(x=conv_features, perm=[0, 2, 1]) + + #===== Transformer encoder ===== + b, t, c = conv_features.shape + encoder_word_pos = others["encoder_word_pos"] + gsrm_word_pos = others["gsrm_word_pos"] + + enc_inputs = [conv_features, encoder_word_pos, None] + word_features = wrap_encoder_forFeature( + src_vocab_size=-1, + max_length=t, + n_layer=self.num_encoder_TUs, + n_head=self.num_heads, + d_key=int(self.hidden_dims / self.num_heads), + d_value=int(self.hidden_dims / self.num_heads), + d_model=self.hidden_dims, + d_inner_hid=self.hidden_dims, + prepostprocess_dropout=0.1, + attention_dropout=0.1, + relu_dropout=0.1, + preprocess_cmd="n", + postprocess_cmd="da", + weight_sharing=True, + enc_inputs=enc_inputs, ) + fluid.clip.set_gradient_clip( + fluid.clip.GradientClipByValue(gradient_clip)) + + #===== Parallel Visual Attention Module ===== + b, t, c = word_features.shape + + word_features = fluid.layers.fc(word_features, c, num_flatten_dims=2) + word_features_ = fluid.layers.reshape(word_features, [-1, 1, t, c]) + word_features_ = fluid.layers.expand(word_features_, + [1, self.max_length, 1, 1]) + word_pos_feature = fluid.layers.embedding(gsrm_word_pos, + [self.max_length, c]) + word_pos_ = fluid.layers.reshape(word_pos_feature, + [-1, self.max_length, 1, c]) + word_pos_ = fluid.layers.expand(word_pos_, [1, 1, t, 1]) + temp = fluid.layers.elementwise_add( + word_features_, word_pos_, act='tanh') + + attention_weight = fluid.layers.fc(input=temp, + size=1, + num_flatten_dims=3, + bias_attr=False) + attention_weight = fluid.layers.reshape( + x=attention_weight, shape=[-1, self.max_length, t]) + attention_weight = fluid.layers.softmax(input=attention_weight, axis=-1) + + pvam_features = fluid.layers.matmul(attention_weight, + word_features) #[b, max_length, c] + + return pvam_features + + def gsrm(self, pvam_features, others): + + #===== GSRM Visual-to-semantic embedding block ===== + b, t, c = pvam_features.shape + word_out = fluid.layers.fc( + input=fluid.layers.reshape(pvam_features, [-1, c]), + size=self.char_num, + act="softmax") + #word_out.stop_gradient = True + word_ids = fluid.layers.argmax(word_out, axis=1) + word_ids.stop_gradient = True + word_ids = fluid.layers.reshape(x=word_ids, shape=[-1, t, 1]) + + #===== GSRM Semantic reasoning block ===== + """ + This module is achieved through bi-transformers, + ngram_feature1 is the froward one, ngram_fetaure2 is the backward one + """ + pad_idx = self.char_num + gsrm_word_pos = others["gsrm_word_pos"] + gsrm_slf_attn_bias1 = others["gsrm_slf_attn_bias1"] + gsrm_slf_attn_bias2 = others["gsrm_slf_attn_bias2"] + + def prepare_bi(word_ids): + """ + 
prepare bi for gsrm + word1 for forward; word2 for backward + """ + word1 = fluid.layers.cast(word_ids, "float32") + word1 = fluid.layers.pad(word1, [0, 0, 1, 0, 0, 0], + pad_value=1.0 * pad_idx) + word1 = fluid.layers.cast(word1, "int64") + word1 = word1[:, :-1, :] + word2 = word_ids + return word1, word2 + + word1, word2 = prepare_bi(word_ids) + word1.stop_gradient = True + word2.stop_gradient = True + enc_inputs_1 = [word1, gsrm_word_pos, gsrm_slf_attn_bias1] + enc_inputs_2 = [word2, gsrm_word_pos, gsrm_slf_attn_bias2] + + gsrm_feature1 = wrap_encoder( + src_vocab_size=self.char_num + 1, + max_length=self.max_length, + n_layer=self.num_decoder_TUs, + n_head=self.num_heads, + d_key=int(self.hidden_dims / self.num_heads), + d_value=int(self.hidden_dims / self.num_heads), + d_model=self.hidden_dims, + d_inner_hid=self.hidden_dims, + prepostprocess_dropout=0.1, + attention_dropout=0.1, + relu_dropout=0.1, + preprocess_cmd="n", + postprocess_cmd="da", + weight_sharing=True, + enc_inputs=enc_inputs_1, ) + gsrm_feature2 = wrap_encoder( + src_vocab_size=self.char_num + 1, + max_length=self.max_length, + n_layer=self.num_decoder_TUs, + n_head=self.num_heads, + d_key=int(self.hidden_dims / self.num_heads), + d_value=int(self.hidden_dims / self.num_heads), + d_model=self.hidden_dims, + d_inner_hid=self.hidden_dims, + prepostprocess_dropout=0.1, + attention_dropout=0.1, + relu_dropout=0.1, + preprocess_cmd="n", + postprocess_cmd="da", + weight_sharing=True, + enc_inputs=enc_inputs_2, ) + gsrm_feature2 = fluid.layers.pad(gsrm_feature2, [0, 0, 0, 1, 0, 0], + pad_value=0.) + gsrm_feature2 = gsrm_feature2[:, 1:, ] + gsrm_features = gsrm_feature1 + gsrm_feature2 + + b, t, c = gsrm_features.shape + + gsrm_out = fluid.layers.matmul( + x=gsrm_features, + y=fluid.default_main_program().global_block().var( + "src_word_emb_table"), + transpose_y=True) + b, t, c = gsrm_out.shape + gsrm_out = fluid.layers.softmax(input=fluid.layers.reshape(gsrm_out, + [-1, c])) + + return gsrm_features, word_out, gsrm_out + + def vsfd(self, pvam_features, gsrm_features): + + #===== Visual-Semantic Fusion Decoder Module ===== + b, t, c1 = pvam_features.shape + b, t, c2 = gsrm_features.shape + combine_features_ = fluid.layers.concat( + [pvam_features, gsrm_features], axis=2) + img_comb_features_ = fluid.layers.reshape( + x=combine_features_, shape=[-1, c1 + c2]) + img_comb_features_map = fluid.layers.fc(input=img_comb_features_, + size=c1, + act="sigmoid") + img_comb_features_map = fluid.layers.reshape( + x=img_comb_features_map, shape=[-1, t, c1]) + combine_features = img_comb_features_map * pvam_features + ( + 1.0 - img_comb_features_map) * gsrm_features + img_comb_features = fluid.layers.reshape( + x=combine_features, shape=[-1, c1]) + + fc_out = fluid.layers.fc(input=img_comb_features, + size=self.char_num, + act="softmax") + return fc_out + + def __call__(self, inputs, others, mode=None): + + pvam_features = self.pvam(inputs, others) + gsrm_features, word_out, gsrm_out = self.gsrm(pvam_features, others) + final_out = self.vsfd(pvam_features, gsrm_features) + + _, decoded_out = fluid.layers.topk(input=final_out, k=1) + predicts = { + 'predict': final_out, + 'decoded_out': decoded_out, + 'word_out': word_out, + 'gsrm_out': gsrm_out + } + + return predicts diff --git a/ppocr/modeling/heads/self_attention/__init__.py b/ppocr/modeling/heads/self_attention/__init__.py new file mode 100644 index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391 diff --git 
a/ppocr/modeling/heads/self_attention/model.py b/ppocr/modeling/heads/self_attention/model.py
new file mode 100644
index 0000000000000000000000000000000000000000..8ac1458b7dca3dedc368a16fa00f52a9aa4f4f93
--- /dev/null
+++ b/ppocr/modeling/heads/self_attention/model.py
@@ -0,0 +1,1058 @@
+from functools import partial
+import numpy as np
+
+import paddle.fluid as fluid
+import paddle.fluid.layers as layers
+
+# Set seed for CE
+dropout_seed = None
+
+
+def wrap_layer_with_block(layer, block_idx):
+    """
+    Make a layer definition support specifying the block it is added to, so
+    that layers can be created in blocks other than the current one. This
+    makes it easy to define caches that live outside a while loop.
+    """
+
+    class BlockGuard(object):
+        """
+        BlockGuard class.
+
+        BlockGuard class is used to switch to the given block in a program by
+        using the Python `with` keyword.
+        """
+
+        def __init__(self, block_idx=None, main_program=None):
+            self.main_program = fluid.default_main_program(
+            ) if main_program is None else main_program
+            self.old_block_idx = self.main_program.current_block().idx
+            self.new_block_idx = block_idx
+
+        def __enter__(self):
+            self.main_program.current_block_idx = self.new_block_idx
+
+        def __exit__(self, exc_type, exc_val, exc_tb):
+            self.main_program.current_block_idx = self.old_block_idx
+            if exc_type is not None:
+                return False  # re-raise exception
+            return True
+
+    def layer_wrapper(*args, **kwargs):
+        with BlockGuard(block_idx):
+            return layer(*args, **kwargs)
+
+    return layer_wrapper
+
+
+def position_encoding_init(n_position, d_pos_vec):
+    """
+    Generate the initial values for the sinusoid position encoding table.
+    """
+    channels = d_pos_vec
+    position = np.arange(n_position)
+    num_timescales = channels // 2
+    log_timescale_increment = (np.log(float(1e4) / float(1)) /
+                               (num_timescales - 1))
+    # frequencies follow the standard sinusoid scheme: exp(i * -log(1e4)/(n-1))
+    inv_timescales = np.exp(
+        np.arange(num_timescales) * -log_timescale_increment)
+    scaled_time = np.expand_dims(position, 1) * np.expand_dims(inv_timescales,
+                                                               0)
+    signal = np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)
+    signal = np.pad(signal, [[0, 0], [0, np.mod(channels, 2)]], 'constant')
+    position_enc = signal
+    return position_enc.astype("float32")
+
+
+def multi_head_attention(queries,
+                         keys,
+                         values,
+                         attn_bias,
+                         d_key,
+                         d_value,
+                         d_model,
+                         n_head=1,
+                         dropout_rate=0.,
+                         cache=None,
+                         gather_idx=None,
+                         static_kv=False):
+    """
+    Multi-Head Attention. Note that attn_bias is added to the logits before
+    computing the softmax activation, to mask certain selected positions so
+    that they will not be considered in the attention weights.
+    """
+    keys = queries if keys is None else keys
+    values = keys if values is None else values
+
+    if not (len(queries.shape) == len(keys.shape) == len(values.shape) == 3):
+        raise ValueError(
+            "Inputs: queries, keys and values should all be 3-D tensors.")
+
+    def __compute_qkv(queries, keys, values, n_head, d_key, d_value):
+        """
+        Add linear projection to queries, keys, and values.
+        """
+        q = layers.fc(input=queries,
+                      size=d_key * n_head,
+                      bias_attr=False,
+                      num_flatten_dims=2)
+        # For encoder-decoder attention in inference, insert the ops and vars
+        # into global block to use as cache among beam search.
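+        # Note: when cache is used with static_kv at inference time, the k/v
+        # projections below are created in the parent block of the decoding
+        # while-loop, so they are computed once and reused across beam-search
+        # steps through the cache rather than rebuilt at every step.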
+        fc_layer = wrap_layer_with_block(
+            layers.fc, fluid.default_main_program().current_block()
+            .parent_idx) if cache is not None and static_kv else layers.fc
+        k = fc_layer(
+            input=keys,
+            size=d_key * n_head,
+            bias_attr=False,
+            num_flatten_dims=2)
+        v = fc_layer(
+            input=values,
+            size=d_value * n_head,
+            bias_attr=False,
+            num_flatten_dims=2)
+        return q, k, v
+
+    def __split_heads_qkv(queries, keys, values, n_head, d_key, d_value):
+        """
+        Reshape input tensors at the last dimension to split multi-heads
+        and then transpose. Specifically, transform the input tensor with shape
+        [bs, max_sequence_length, n_head * hidden_dim] to the output tensor
+        with shape [bs, n_head, max_sequence_length, hidden_dim].
+        """
+        # The value 0 in shape attr means copying the corresponding dimension
+        # size of the input as the output dimension size.
+        reshaped_q = layers.reshape(
+            x=queries, shape=[0, 0, n_head, d_key], inplace=True)
+        # permute the dimensions into:
+        # [batch_size, n_head, max_sequence_len, hidden_size_per_head]
+        q = layers.transpose(x=reshaped_q, perm=[0, 2, 1, 3])
+        # For encoder-decoder attention in inference, insert the ops and vars
+        # into global block to use as cache among beam search.
+        reshape_layer = wrap_layer_with_block(
+            layers.reshape,
+            fluid.default_main_program().current_block()
+            .parent_idx) if cache is not None and static_kv else layers.reshape
+        transpose_layer = wrap_layer_with_block(
+            layers.transpose,
+            fluid.default_main_program().current_block()
+            .parent_idx) if cache is not None and static_kv else layers.transpose
+        reshaped_k = reshape_layer(
+            x=keys, shape=[0, 0, n_head, d_key], inplace=True)
+        k = transpose_layer(x=reshaped_k, perm=[0, 2, 1, 3])
+        reshaped_v = reshape_layer(
+            x=values, shape=[0, 0, n_head, d_value], inplace=True)
+        v = transpose_layer(x=reshaped_v, perm=[0, 2, 1, 3])
+
+        if cache is not None:  # only for faster inference
+            if static_kv:  # For encoder-decoder attention in inference
+                cache_k, cache_v = cache["static_k"], cache["static_v"]
+                # To init the static_k and static_v in cache.
+                # Maybe we can use condition_op(if_else) to do these at the first
+                # step in while loop to replace these, however it might be less
+                # efficient.
+                static_cache_init = wrap_layer_with_block(
+                    layers.assign,
+                    fluid.default_main_program().current_block().parent_idx)
+                static_cache_init(k, cache_k)
+                static_cache_init(v, cache_v)
+            else:  # For decoder self-attention in inference
+                cache_k, cache_v = cache["k"], cache["v"]
+            # gather cell states corresponding to selected parent
+            select_k = layers.gather(cache_k, index=gather_idx)
+            select_v = layers.gather(cache_v, index=gather_idx)
+            if not static_kv:
+                # For self attention in inference, use cache and concat time steps.
+                select_k = layers.concat([select_k, k], axis=2)
+                select_v = layers.concat([select_v, v], axis=2)
+            # update cell states(caches) cached in global block
+            layers.assign(select_k, cache_k)
+            layers.assign(select_v, cache_v)
+            return q, select_k, select_v
+        return q, k, v
+
+    def __combine_heads(x):
+        """
+        Transpose and then reshape the last two dimensions of the input tensor
+        x so that they become one dimension, which is the reverse of
+        __split_heads.
+        """
+        if len(x.shape) != 4:
+            raise ValueError("Input(x) should be a 4-D Tensor.")
+
+        trans_x = layers.transpose(x, perm=[0, 2, 1, 3])
+        # The value 0 in shape attr means copying the corresponding dimension
+        # size of the input as the output dimension size.
+        return layers.reshape(
+            x=trans_x,
+            shape=[0, 0, trans_x.shape[2] * trans_x.shape[3]],
+            inplace=True)
+
+    def scaled_dot_product_attention(q, k, v, attn_bias, d_key, dropout_rate):
+        """
+        Scaled Dot-Product Attention
+        """
+        product = layers.matmul(x=q, y=k, transpose_y=True, alpha=d_key**-0.5)
+        if attn_bias:
+            product += attn_bias
+        weights = layers.softmax(product)
+        if dropout_rate:
+            weights = layers.dropout(
+                weights,
+                dropout_prob=dropout_rate,
+                seed=dropout_seed,
+                is_test=False)
+        out = layers.matmul(weights, v)
+        return out
+
+    q, k, v = __compute_qkv(queries, keys, values, n_head, d_key, d_value)
+    q, k, v = __split_heads_qkv(q, k, v, n_head, d_key, d_value)
+
+    ctx_multiheads = scaled_dot_product_attention(q, k, v, attn_bias, d_model,
+                                                  dropout_rate)
+
+    out = __combine_heads(ctx_multiheads)
+
+    # Project back to the model size.
+    proj_out = layers.fc(input=out,
+                         size=d_model,
+                         bias_attr=False,
+                         num_flatten_dims=2)
+    return proj_out
+
+
+def positionwise_feed_forward(x, d_inner_hid, d_hid, dropout_rate):
+    """
+    Position-wise Feed-Forward Networks.
+    This module consists of two linear transformations with a ReLU activation
+    in between, which is applied to each position separately and identically.
+    """
+    hidden = layers.fc(input=x,
+                       size=d_inner_hid,
+                       num_flatten_dims=2,
+                       act="relu")
+    if dropout_rate:
+        hidden = layers.dropout(
+            hidden, dropout_prob=dropout_rate, seed=dropout_seed, is_test=False)
+    out = layers.fc(input=hidden, size=d_hid, num_flatten_dims=2)
+    return out
+
+
+def pre_post_process_layer(prev_out, out, process_cmd, dropout_rate=0.):
+    """
+    Add residual connection, layer normalization and dropout to the out tensor
+    optionally, according to the value of process_cmd.
+    This will be used before or after multi-head attention and position-wise
+    feed-forward networks.
+    """
+    for cmd in process_cmd:
+        if cmd == "a":  # add residual connection
+            out = out + prev_out if prev_out else out
+        elif cmd == "n":  # add layer normalization
+            out = layers.layer_norm(
+                out,
+                begin_norm_axis=len(out.shape) - 1,
+                param_attr=fluid.initializer.Constant(1.),
+                bias_attr=fluid.initializer.Constant(0.))
+        elif cmd == "d":  # add dropout
+            if dropout_rate:
+                out = layers.dropout(
+                    out,
+                    dropout_prob=dropout_rate,
+                    seed=dropout_seed,
+                    is_test=False)
+    return out
+
+
+pre_process_layer = partial(pre_post_process_layer, None)
+post_process_layer = pre_post_process_layer
+
+
+def prepare_encoder(
+        src_word,  #[b,t,c]
+        src_pos,
+        src_vocab_size,
+        src_emb_dim,
+        src_max_len,
+        dropout_rate=0.,
+        bos_idx=0,
+        word_emb_param_name=None,
+        pos_enc_param_name=None):
+    """Add word embeddings and position encodings.
+    The output tensor has a shape of:
+    [batch_size, max_src_length_in_batch, d_model].
+    This module is used at the bottom of the encoder stacks.
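+    For SRN, src_word here is already a dense feature sequence (flattened CNN
+    features of shape [b, t, c]), so it is cast and scaled directly rather
+    than looked up in an embedding table; compare prepare_decoder below.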
+ """ + + src_word_emb = src_word #layers.concat(res,axis=1) + src_word_emb = layers.cast(src_word_emb, 'float32') + # print("src_word_emb",src_word_emb) + + src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5) + src_pos_enc = layers.embedding( + src_pos, + size=[src_max_len, src_emb_dim], + param_attr=fluid.ParamAttr( + name=pos_enc_param_name, trainable=False)) + src_pos_enc.stop_gradient = True + enc_input = src_word_emb + src_pos_enc + return layers.dropout( + enc_input, dropout_prob=dropout_rate, seed=dropout_seed, + is_test=False) if dropout_rate else enc_input + + +def prepare_decoder(src_word, + src_pos, + src_vocab_size, + src_emb_dim, + src_max_len, + dropout_rate=0., + bos_idx=0, + word_emb_param_name=None, + pos_enc_param_name=None): + """Add word embeddings and position encodings. + The output tensor has a shape of: + [batch_size, max_src_length_in_batch, d_model]. + This module is used at the bottom of the encoder stacks. + """ + src_word_emb = layers.embedding( + src_word, + size=[src_vocab_size, src_emb_dim], + padding_idx=bos_idx, # set embedding of bos to 0 + param_attr=fluid.ParamAttr( + name=word_emb_param_name, + initializer=fluid.initializer.Normal(0., src_emb_dim**-0.5))) + # print("target_word_emb",src_word_emb) + src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5) + src_pos_enc = layers.embedding( + src_pos, + size=[src_max_len, src_emb_dim], + param_attr=fluid.ParamAttr( + name=pos_enc_param_name, trainable=False)) + src_pos_enc.stop_gradient = True + enc_input = src_word_emb + src_pos_enc + return layers.dropout( + enc_input, dropout_prob=dropout_rate, seed=dropout_seed, + is_test=False) if dropout_rate else enc_input + + +# prepare_encoder = partial( +# prepare_encoder_decoder, pos_enc_param_name=pos_enc_param_names[0]) +# prepare_decoder = partial( +# prepare_encoder_decoder, pos_enc_param_name=pos_enc_param_names[1]) + + +def encoder_layer(enc_input, + attn_bias, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd="n", + postprocess_cmd="da"): + """The encoder layers that can be stacked to form a deep encoder. + This module consits of a multi-head (self) attention followed by + position-wise feed-forward networks and both the two components companied + with the post_process_layer to add residual connection, layer normalization + and droput. + """ + attn_output = multi_head_attention( + pre_process_layer(enc_input, preprocess_cmd, + prepostprocess_dropout), None, None, attn_bias, d_key, + d_value, d_model, n_head, attention_dropout) + attn_output = post_process_layer(enc_input, attn_output, postprocess_cmd, + prepostprocess_dropout) + ffd_output = positionwise_feed_forward( + pre_process_layer(attn_output, preprocess_cmd, prepostprocess_dropout), + d_inner_hid, d_model, relu_dropout) + return post_process_layer(attn_output, ffd_output, postprocess_cmd, + prepostprocess_dropout) + + +def encoder(enc_input, + attn_bias, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd="n", + postprocess_cmd="da"): + """ + The encoder is composed of a stack of identical layers returned by calling + encoder_layer. 
+ """ + for i in range(n_layer): + enc_output = encoder_layer( + enc_input, + attn_bias, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, ) + enc_input = enc_output + enc_output = pre_process_layer(enc_output, preprocess_cmd, + prepostprocess_dropout) + return enc_output + + +def decoder_layer(dec_input, + enc_output, + slf_attn_bias, + dec_enc_attn_bias, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + cache=None, + gather_idx=None): + """ The layer to be stacked in decoder part. + The structure of this module is similar to that in the encoder part except + a multi-head attention is added to implement encoder-decoder attention. + """ + slf_attn_output = multi_head_attention( + pre_process_layer(dec_input, preprocess_cmd, prepostprocess_dropout), + None, + None, + slf_attn_bias, + d_key, + d_value, + d_model, + n_head, + attention_dropout, + cache=cache, + gather_idx=gather_idx) + slf_attn_output = post_process_layer( + dec_input, + slf_attn_output, + postprocess_cmd, + prepostprocess_dropout, ) + enc_attn_output = multi_head_attention( + pre_process_layer(slf_attn_output, preprocess_cmd, + prepostprocess_dropout), + enc_output, + enc_output, + dec_enc_attn_bias, + d_key, + d_value, + d_model, + n_head, + attention_dropout, + cache=cache, + gather_idx=gather_idx, + static_kv=True) + enc_attn_output = post_process_layer( + slf_attn_output, + enc_attn_output, + postprocess_cmd, + prepostprocess_dropout, ) + ffd_output = positionwise_feed_forward( + pre_process_layer(enc_attn_output, preprocess_cmd, + prepostprocess_dropout), + d_inner_hid, + d_model, + relu_dropout, ) + dec_output = post_process_layer( + enc_attn_output, + ffd_output, + postprocess_cmd, + prepostprocess_dropout, ) + return dec_output + + +def decoder(dec_input, + enc_output, + dec_slf_attn_bias, + dec_enc_attn_bias, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + caches=None, + gather_idx=None): + """ + The decoder is composed of a stack of identical decoder_layer layers. + """ + for i in range(n_layer): + dec_output = decoder_layer( + dec_input, + enc_output, + dec_slf_attn_bias, + dec_enc_attn_bias, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + cache=None if caches is None else caches[i], + gather_idx=gather_idx) + dec_input = dec_output + dec_output = pre_process_layer(dec_output, preprocess_cmd, + prepostprocess_dropout) + return dec_output + + +def make_all_inputs(input_fields): + """ + Define the input data layers for the transformer model. 
+ """ + inputs = [] + for input_field in input_fields: + input_var = layers.data( + name=input_field, + shape=input_descs[input_field][0], + dtype=input_descs[input_field][1], + lod_level=input_descs[input_field][2] + if len(input_descs[input_field]) == 3 else 0, + append_batch_size=False) + inputs.append(input_var) + return inputs + + +def make_all_py_reader_inputs(input_fields, is_test=False): + reader = layers.py_reader( + capacity=20, + name="test_reader" if is_test else "train_reader", + shapes=[input_descs[input_field][0] for input_field in input_fields], + dtypes=[input_descs[input_field][1] for input_field in input_fields], + lod_levels=[ + input_descs[input_field][2] + if len(input_descs[input_field]) == 3 else 0 + for input_field in input_fields + ]) + return layers.read_file(reader), reader + + +def transformer(src_vocab_size, + trg_vocab_size, + max_length, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + label_smooth_eps, + bos_idx=0, + use_py_reader=False, + is_test=False): + if weight_sharing: + assert src_vocab_size == trg_vocab_size, ( + "Vocabularies in source and target should be same for weight sharing." + ) + + data_input_names = encoder_data_input_fields + \ + decoder_data_input_fields[:-1] + label_data_input_fields + + if use_py_reader: + all_inputs, reader = make_all_py_reader_inputs(data_input_names, + is_test) + else: + all_inputs = make_all_inputs(data_input_names) + # print("all inputs",all_inputs) + enc_inputs_len = len(encoder_data_input_fields) + dec_inputs_len = len(decoder_data_input_fields[:-1]) + enc_inputs = all_inputs[0:enc_inputs_len] + dec_inputs = all_inputs[enc_inputs_len:enc_inputs_len + dec_inputs_len] + label = all_inputs[-2] + weights = all_inputs[-1] + + enc_output = wrap_encoder( + src_vocab_size, 64, n_layer, n_head, d_key, d_value, d_model, + d_inner_hid, prepostprocess_dropout, attention_dropout, relu_dropout, + preprocess_cmd, postprocess_cmd, weight_sharing, enc_inputs) + + predict = wrap_decoder( + trg_vocab_size, + max_length, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + dec_inputs, + enc_output, ) + + # Padding index do not contribute to the total loss. The weights is used to + # cancel padding index in calculating the loss. + if label_smooth_eps: + label = layers.label_smooth( + label=layers.one_hot( + input=label, depth=trg_vocab_size), + epsilon=label_smooth_eps) + + cost = layers.softmax_with_cross_entropy( + logits=predict, + label=label, + soft_label=True if label_smooth_eps else False) + weighted_cost = cost * weights + sum_cost = layers.reduce_sum(weighted_cost) + token_num = layers.reduce_sum(weights) + token_num.stop_gradient = True + avg_cost = sum_cost / token_num + return sum_cost, avg_cost, predict, token_num, reader if use_py_reader else None + + +def wrap_encoder_forFeature(src_vocab_size, + max_length, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + enc_inputs=None, + bos_idx=0): + """ + The wrapper assembles together all needed layers for the encoder. + img, src_pos, src_slf_attn_bias = enc_inputs + img + """ + + if enc_inputs is None: + # This is used to implement independent encoder program in inference. 
conv_features, src_pos, src_slf_attn_bias = make_all_inputs(
+            encoder_data_input_fields)
+    else:
+        conv_features, src_pos, src_slf_attn_bias = enc_inputs
+    b, t, c = conv_features.shape
+
+    enc_input = prepare_encoder(
+        conv_features,
+        src_pos,
+        src_vocab_size,
+        d_model,
+        max_length,
+        prepostprocess_dropout,
+        bos_idx=bos_idx,
+        word_emb_param_name="src_word_emb_table")
+
+    enc_output = encoder(
+        enc_input,
+        src_slf_attn_bias,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd, )
+    return enc_output
+
+
+def wrap_encoder(src_vocab_size,
+                 max_length,
+                 n_layer,
+                 n_head,
+                 d_key,
+                 d_value,
+                 d_model,
+                 d_inner_hid,
+                 prepostprocess_dropout,
+                 attention_dropout,
+                 relu_dropout,
+                 preprocess_cmd,
+                 postprocess_cmd,
+                 weight_sharing,
+                 enc_inputs=None,
+                 bos_idx=0):
+    """
+    The wrapper assembles together all needed layers for the encoder, taking
+    (src_word, src_pos, src_slf_attn_bias) as enc_inputs.
+    """
+    if enc_inputs is None:
+        # This is used to implement independent encoder program in inference.
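+        # (Unlike wrap_encoder_forFeature above, which consumes dense visual
+        # features, this wrapper embeds discrete ids via prepare_decoder; the
+        # GSRM semantic reasoning block calls it on predicted character ids.)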
src_word, src_pos, src_slf_attn_bias = make_all_inputs(
+            encoder_data_input_fields)
+    else:
+        src_word, src_pos, src_slf_attn_bias = enc_inputs
+    enc_input = prepare_decoder(
+        src_word,
+        src_pos,
+        src_vocab_size,
+        d_model,
+        max_length,
+        prepostprocess_dropout,
+        bos_idx=bos_idx,
+        word_emb_param_name="src_word_emb_table")
+
+    enc_output = encoder(
+        enc_input,
+        src_slf_attn_bias,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd, )
+    return enc_output
+
+
+def wrap_decoder(trg_vocab_size,
+                 max_length,
+                 n_layer,
+                 n_head,
+                 d_key,
+                 d_value,
+                 d_model,
+                 d_inner_hid,
+                 prepostprocess_dropout,
+                 attention_dropout,
+                 relu_dropout,
+                 preprocess_cmd,
+                 postprocess_cmd,
+                 weight_sharing,
+                 dec_inputs=None,
+                 enc_output=None,
+                 caches=None,
+                 gather_idx=None,
+                 bos_idx=0):
+    """
+    The wrapper assembles together all needed layers for the decoder.
+    """
+    if dec_inputs is None:
+        # This is used to implement independent decoder program in inference.
+        trg_word, trg_pos, trg_slf_attn_bias, trg_src_attn_bias, enc_output = \
+            make_all_inputs(decoder_data_input_fields)
+    else:
+        trg_word, trg_pos, trg_slf_attn_bias, trg_src_attn_bias = dec_inputs
+
+    dec_input = prepare_decoder(
+        trg_word,
+        trg_pos,
+        trg_vocab_size,
+        d_model,
+        max_length,
+        prepostprocess_dropout,
+        bos_idx=bos_idx,
+        word_emb_param_name="src_word_emb_table"
+        if weight_sharing else "trg_word_emb_table")
+    dec_output = decoder(
+        dec_input,
+        enc_output,
+        trg_slf_attn_bias,
+        trg_src_attn_bias,
+        n_layer,
+        n_head,
+        d_key,
+        d_value,
+        d_model,
+        d_inner_hid,
+        prepostprocess_dropout,
+        attention_dropout,
+        relu_dropout,
+        preprocess_cmd,
+        postprocess_cmd,
+        caches=caches,
+        gather_idx=gather_idx)
+    return dec_output
+    # NOTE: the code below is unreachable after the early return above (SRN
+    # only needs dec_output); it is retained from the reference Transformer
+    # implementation, where the decoder also projected to vocabulary logits.
+    # Reshape to 2D tensor to use GEMM instead of BatchedGEMM
+    dec_output = layers.reshape(
+        dec_output, shape=[-1, dec_output.shape[-1]], inplace=True)
+    if weight_sharing:
+        predict = layers.matmul(
+            x=dec_output,
+            y=fluid.default_main_program().global_block().var(
+                "trg_word_emb_table"),
+            transpose_y=True)
+    else:
+        predict = layers.fc(input=dec_output,
+                            size=trg_vocab_size,
+                            bias_attr=False)
+    if dec_inputs is None:
+        # Return probs for independent decoder program.
+ predict = layers.softmax(predict) + return predict + + +def fast_decode(src_vocab_size, + trg_vocab_size, + max_in_len, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + beam_size, + max_out_len, + bos_idx, + eos_idx, + use_py_reader=False): + """ + Use beam search to decode. Caches will be used to store states of history + steps which can make the decoding faster. + """ + data_input_names = encoder_data_input_fields + fast_decoder_data_input_fields + + if use_py_reader: + all_inputs, reader = make_all_py_reader_inputs(data_input_names) + else: + all_inputs = make_all_inputs(data_input_names) + + enc_inputs_len = len(encoder_data_input_fields) + dec_inputs_len = len(fast_decoder_data_input_fields) + enc_inputs = all_inputs[0:enc_inputs_len] #enc_inputs tensor + dec_inputs = all_inputs[enc_inputs_len:enc_inputs_len + + dec_inputs_len] #dec_inputs tensor + + enc_output = wrap_encoder( + src_vocab_size, + 64, ##to do !!!!!???? + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + enc_inputs, + bos_idx=bos_idx) + start_tokens, init_scores, parent_idx, trg_src_attn_bias = dec_inputs + + def beam_search(): + max_len = layers.fill_constant( + shape=[1], + dtype=start_tokens.dtype, + value=max_out_len, + force_cpu=True) + step_idx = layers.fill_constant( + shape=[1], dtype=start_tokens.dtype, value=0, force_cpu=True) + cond = layers.less_than(x=step_idx, y=max_len) # default force_cpu=True + while_op = layers.While(cond) + # array states will be stored for each step. + ids = layers.array_write( + layers.reshape(start_tokens, (-1, 1)), step_idx) + scores = layers.array_write(init_scores, step_idx) + # cell states will be overwrited at each step. + # caches contains states of history steps in decoder self-attention + # and static encoder output projections in encoder-decoder attention + # to reduce redundant computation. + caches = [ + { + "k": # for self attention + layers.fill_constant_batch_size_like( + input=start_tokens, + shape=[-1, n_head, 0, d_key], + dtype=enc_output.dtype, + value=0), + "v": # for self attention + layers.fill_constant_batch_size_like( + input=start_tokens, + shape=[-1, n_head, 0, d_value], + dtype=enc_output.dtype, + value=0), + "static_k": # for encoder-decoder attention + layers.create_tensor(dtype=enc_output.dtype), + "static_v": # for encoder-decoder attention + layers.create_tensor(dtype=enc_output.dtype) + } for i in range(n_layer) + ] + + with while_op.block(): + pre_ids = layers.array_read(array=ids, i=step_idx) + # Since beam_search_op dosen't enforce pre_ids' shape, we can do + # inplace reshape here which actually change the shape of pre_ids. 
+ pre_ids = layers.reshape(pre_ids, (-1, 1, 1), inplace=True) + pre_scores = layers.array_read(array=scores, i=step_idx) + # gather cell states corresponding to selected parent + pre_src_attn_bias = layers.gather( + trg_src_attn_bias, index=parent_idx) + pre_pos = layers.elementwise_mul( + x=layers.fill_constant_batch_size_like( + input=pre_src_attn_bias, # cann't use lod tensor here + value=1, + shape=[-1, 1, 1], + dtype=pre_ids.dtype), + y=step_idx, + axis=0) + logits = wrap_decoder( + trg_vocab_size, + max_in_len, + n_layer, + n_head, + d_key, + d_value, + d_model, + d_inner_hid, + prepostprocess_dropout, + attention_dropout, + relu_dropout, + preprocess_cmd, + postprocess_cmd, + weight_sharing, + dec_inputs=(pre_ids, pre_pos, None, pre_src_attn_bias), + enc_output=enc_output, + caches=caches, + gather_idx=parent_idx, + bos_idx=bos_idx) + # intra-beam topK + topk_scores, topk_indices = layers.topk( + input=layers.softmax(logits), k=beam_size) + accu_scores = layers.elementwise_add( + x=layers.log(topk_scores), y=pre_scores, axis=0) + # beam_search op uses lod to differentiate branches. + accu_scores = layers.lod_reset(accu_scores, pre_ids) + # topK reduction across beams, also contain special handle of + # end beams and end sentences(batch reduction) + selected_ids, selected_scores, gather_idx = layers.beam_search( + pre_ids=pre_ids, + pre_scores=pre_scores, + ids=topk_indices, + scores=accu_scores, + beam_size=beam_size, + end_id=eos_idx, + return_parent_idx=True) + layers.increment(x=step_idx, value=1.0, in_place=True) + # cell states(caches) have been updated in wrap_decoder, + # only need to update beam search states here. + layers.array_write(selected_ids, i=step_idx, array=ids) + layers.array_write(selected_scores, i=step_idx, array=scores) + layers.assign(gather_idx, parent_idx) + layers.assign(pre_src_attn_bias, trg_src_attn_bias) + length_cond = layers.less_than(x=step_idx, y=max_len) + finish_cond = layers.logical_not(layers.is_empty(x=selected_ids)) + layers.logical_and(x=length_cond, y=finish_cond, out=cond) + + finished_ids, finished_scores = layers.beam_search_decode( + ids, scores, beam_size=beam_size, end_id=eos_idx) + return finished_ids, finished_scores + + finished_ids, finished_scores = beam_search() + return finished_ids, finished_scores, reader if use_py_reader else None diff --git a/ppocr/modeling/losses/det_sast_loss.py b/ppocr/modeling/losses/det_sast_loss.py new file mode 100644 index 0000000000000000000000000000000000000000..fb1a545af9d67b61878b3fb476fdc81746e8b992 --- /dev/null +++ b/ppocr/modeling/losses/det_sast_loss.py @@ -0,0 +1,115 @@ +#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. 
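SASTLoss below combines a Dice-style overlap loss on the text center-line score map with normalized smooth-L1 losses on the border, vertex-offset and center-offset maps. As a rough illustration of those two ingredients, here is a small numpy sketch; the function names, shapes and random data are illustrative only, not part of this module's API:

import numpy as np

def dice_loss(pred, gt, mask, eps=1e-5):
    # 1 - 2*intersection/union, evaluated only where mask == 1
    inter = np.sum(pred * gt * mask)
    union = np.sum(pred * mask) + np.sum(gt * mask)
    return 1.0 - 2.0 * inter / (union + eps)

def smooth_l1(diff):
    # 0.5*d^2 where |d| < 1, |d| - 0.5 elsewhere (elementwise)
    ad = np.abs(diff)
    return np.where(ad < 1.0, 0.5 * ad * ad, ad - 0.5)

pred = np.random.rand(1, 1, 8, 8)
gt = (np.random.rand(1, 1, 8, 8) > 0.5).astype(np.float32)
mask = np.ones_like(gt)
print(dice_loss(pred, gt, mask), smooth_l1(pred - gt).mean())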
+ +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import paddle.fluid as fluid + + +class SASTLoss(object): + """ + SAST Loss function + """ + + def __init__(self, params=None): + super(SASTLoss, self).__init__() + + def __call__(self, predicts, labels): + """ + tcl_pos: N x 128 x 3 + tcl_mask: N x 128 x 1 + tcl_label: N x X list or LoDTensor + """ + + f_score = predicts['f_score'] + f_border = predicts['f_border'] + f_tvo = predicts['f_tvo'] + f_tco = predicts['f_tco'] + + l_score = labels['input_score'] + l_border = labels['input_border'] + l_mask = labels['input_mask'] + l_tvo = labels['input_tvo'] + l_tco = labels['input_tco'] + + #score_loss + intersection = fluid.layers.reduce_sum(f_score * l_score * l_mask) + union = fluid.layers.reduce_sum(f_score * l_mask) + fluid.layers.reduce_sum(l_score * l_mask) + score_loss = 1.0 - 2 * intersection / (union + 1e-5) + + #border loss + l_border_split, l_border_norm = fluid.layers.split(l_border, num_or_sections=[4, 1], dim=1) + f_border_split = f_border + l_border_norm_split = fluid.layers.expand(x=l_border_norm, expand_times=[1, 4, 1, 1]) + l_border_score = fluid.layers.expand(x=l_score, expand_times=[1, 4, 1, 1]) + l_border_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 4, 1, 1]) + border_diff = l_border_split - f_border_split + abs_border_diff = fluid.layers.abs(border_diff) + border_sign = abs_border_diff < 1.0 + border_sign = fluid.layers.cast(border_sign, dtype='float32') + border_sign.stop_gradient = True + border_in_loss = 0.5 * abs_border_diff * abs_border_diff * border_sign + \ + (abs_border_diff - 0.5) * (1.0 - border_sign) + border_out_loss = l_border_norm_split * border_in_loss + border_loss = fluid.layers.reduce_sum(border_out_loss * l_border_score * l_border_mask) / \ + (fluid.layers.reduce_sum(l_border_score * l_border_mask) + 1e-5) + + #tvo_loss + l_tvo_split, l_tvo_norm = fluid.layers.split(l_tvo, num_or_sections=[8, 1], dim=1) + f_tvo_split = f_tvo + l_tvo_norm_split = fluid.layers.expand(x=l_tvo_norm, expand_times=[1, 8, 1, 1]) + l_tvo_score = fluid.layers.expand(x=l_score, expand_times=[1, 8, 1, 1]) + l_tvo_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 8, 1, 1]) + # + tvo_geo_diff = l_tvo_split - f_tvo_split + abs_tvo_geo_diff = fluid.layers.abs(tvo_geo_diff) + tvo_sign = abs_tvo_geo_diff < 1.0 + tvo_sign = fluid.layers.cast(tvo_sign, dtype='float32') + tvo_sign.stop_gradient = True + tvo_in_loss = 0.5 * abs_tvo_geo_diff * abs_tvo_geo_diff * tvo_sign + \ + (abs_tvo_geo_diff - 0.5) * (1.0 - tvo_sign) + tvo_out_loss = l_tvo_norm_split * tvo_in_loss + tvo_loss = fluid.layers.reduce_sum(tvo_out_loss * l_tvo_score * l_tvo_mask) / \ + (fluid.layers.reduce_sum(l_tvo_score * l_tvo_mask) + 1e-5) + + #tco_loss + l_tco_split, l_tco_norm = fluid.layers.split(l_tco, num_or_sections=[2, 1], dim=1) + f_tco_split = f_tco + l_tco_norm_split = fluid.layers.expand(x=l_tco_norm, expand_times=[1, 2, 1, 1]) + l_tco_score = fluid.layers.expand(x=l_score, expand_times=[1, 2, 1, 1]) + l_tco_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 2, 1, 1]) + # + tco_geo_diff = l_tco_split - f_tco_split + abs_tco_geo_diff = fluid.layers.abs(tco_geo_diff) + tco_sign = abs_tco_geo_diff < 1.0 + tco_sign = fluid.layers.cast(tco_sign, dtype='float32') + tco_sign.stop_gradient = True + tco_in_loss = 0.5 * abs_tco_geo_diff * abs_tco_geo_diff * tco_sign + \ + (abs_tco_geo_diff - 0.5) * (1.0 - tco_sign) + tco_out_loss = l_tco_norm_split * tco_in_loss + tco_loss = 
fluid.layers.reduce_sum(tco_out_loss * l_tco_score * l_tco_mask) / \ + (fluid.layers.reduce_sum(l_tco_score * l_tco_mask) + 1e-5) + + + # total loss + tvo_lw, tco_lw = 1.5, 1.5 + score_lw, border_lw = 1.0, 1.0 + total_loss = score_loss * score_lw + border_loss * border_lw + \ + tvo_loss * tvo_lw + tco_loss * tco_lw + + losses = {'total_loss':total_loss, "score_loss":score_loss,\ + "border_loss":border_loss, 'tvo_loss':tvo_loss, 'tco_loss':tco_loss} + return losses \ No newline at end of file diff --git a/ppocr/modeling/losses/rec_srn_loss.py b/ppocr/modeling/losses/rec_srn_loss.py new file mode 100755 index 0000000000000000000000000000000000000000..b1ebd86fadfc48831ca3d03e79ef65f4be2aa5e8 --- /dev/null +++ b/ppocr/modeling/losses/rec_srn_loss.py @@ -0,0 +1,55 @@ +#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserve. +# +#Licensed under the Apache License, Version 2.0 (the "License"); +#you may not use this file except in compliance with the License. +#You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +#Unless required by applicable law or agreed to in writing, software +#distributed under the License is distributed on an "AS IS" BASIS, +#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +#See the License for the specific language governing permissions and +#limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import math + +import paddle +import paddle.fluid as fluid + + +class SRNLoss(object): + def __init__(self, params): + super(SRNLoss, self).__init__() + self.char_num = params['char_num'] + + def __call__(self, predicts, others): + predict = predicts['predict'] + word_predict = predicts['word_out'] + gsrm_predict = predicts['gsrm_out'] + label = others['label'] + lbl_weight = others['lbl_weight'] + + casted_label = fluid.layers.cast(x=label, dtype='int64') + cost_word = fluid.layers.cross_entropy( + input=word_predict, label=casted_label) + cost_gsrm = fluid.layers.cross_entropy( + input=gsrm_predict, label=casted_label) + cost_vsfd = fluid.layers.cross_entropy( + input=predict, label=casted_label) + + cost_word = fluid.layers.reshape( + x=fluid.layers.reduce_sum(cost_word), shape=[1]) + cost_gsrm = fluid.layers.reshape( + x=fluid.layers.reduce_sum(cost_gsrm), shape=[1]) + cost_vsfd = fluid.layers.reshape( + x=fluid.layers.reduce_sum(cost_vsfd), shape=[1]) + + sum_cost = fluid.layers.sum( + [cost_word, cost_vsfd * 2.0, cost_gsrm * 0.15]) + + return [sum_cost, cost_vsfd, cost_word] diff --git a/ppocr/optimizer.py b/ppocr/optimizer.py old mode 100755 new mode 100644 index eb1037c2053de30dacb82c889a02f023683d25d3..55f2eba14c4be738c0dbc686cd32afbcff62f874 --- a/ppocr/optimizer.py +++ b/ppocr/optimizer.py @@ -65,3 +65,44 @@ def AdamDecay(params, parameter_list=None): regularization=L2Decay(regularization_coeff=l2_decay), parameter_list=parameter_list) return optimizer + + +def RMSProp(params, parameter_list=None): + """ + define optimizer function + args: + params(dict): the super parameters + parameter_list (list): list of Variable names to update to minimize loss + return: + """ + base_lr = params.get("base_lr", 0.001) + l2_decay = params.get("l2_decay", 0.00005) + + if 'decay' in params: + supported_decay_mode = ["cosine_decay", "piecewise_decay"] + params = params['decay'] + decay_mode = params['function'] + assert decay_mode in supported_decay_mode, "Supported decay mode is {}, but got {}".format( + 
supported_decay_mode, decay_mode) + + if decay_mode == "cosine_decay": + step_each_epoch = params['step_each_epoch'] + total_epoch = params['total_epoch'] + base_lr = fluid.layers.cosine_decay( + learning_rate=base_lr, + step_each_epoch=step_each_epoch, + epochs=total_epoch) + elif decay_mode == "piecewise_decay": + boundaries = params["boundaries"] + decay_rate = params["decay_rate"] + values = [ + base_lr * decay_rate**idx + for idx in range(len(boundaries) + 1) + ] + base_lr = fluid.layers.piecewise_decay(boundaries, values) + + optimizer = fluid.optimizer.RMSProp( + learning_rate=base_lr, + regularization=fluid.regularizer.L2Decay(regularization_coeff=l2_decay)) + + return optimizer \ No newline at end of file diff --git a/ppocr/postprocess/sast_postprocess.py b/ppocr/postprocess/sast_postprocess.py new file mode 100644 index 0000000000000000000000000000000000000000..ddce65542c6d12244419a37db838e070401b5290 --- /dev/null +++ b/ppocr/postprocess/sast_postprocess.py @@ -0,0 +1,289 @@ +# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved. +# +# Licensed under the Apache License, Version 2.0 (the "License"); +# you may not use this file except in compliance with the License. +# You may obtain a copy of the License at +# +# http://www.apache.org/licenses/LICENSE-2.0 +# +# Unless required by applicable law or agreed to in writing, software +# distributed under the License is distributed on an "AS IS" BASIS, +# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. +# See the License for the specific language governing permissions and +# limitations under the License. + +from __future__ import absolute_import +from __future__ import division +from __future__ import print_function + +import os +import sys +__dir__ = os.path.dirname(__file__) +sys.path.append(__dir__) +sys.path.append(os.path.join(__dir__, '..')) + +import numpy as np +from .locality_aware_nms import nms_locality +# import lanms +import cv2 +import time + + +class SASTPostProcess(object): + """ + The post process for SAST. + """ + + def __init__(self, params): + self.score_thresh = params.get('score_thresh', 0.5) + self.nms_thresh = params.get('nms_thresh', 0.2) + self.sample_pts_num = params.get('sample_pts_num', 2) + self.shrink_ratio_of_width = params.get('shrink_ratio_of_width', 0.3) + self.expand_scale = params.get('expand_scale', 1.0) + self.tcl_map_thresh = 0.5 + + # c++ la-nms is faster, but only support python 3.5 + self.is_python35 = False + if sys.version_info.major == 3 and sys.version_info.minor == 5: + self.is_python35 = True + + def point_pair2poly(self, point_pair_list): + """ + Transfer vertical point_pairs into poly point in clockwise. + """ + # constract poly + point_num = len(point_pair_list) * 2 + point_list = [0] * point_num + for idx, point_pair in enumerate(point_pair_list): + point_list[idx] = point_pair[0] + point_list[point_num - 1 - idx] = point_pair[1] + return np.array(point_list).reshape(-1, 2) + + def shrink_quad_along_width(self, quad, begin_width_ratio=0., end_width_ratio=1.): + """ + Generate shrink_quad_along_width. + """ + ratio_pair = np.array([[begin_width_ratio], [end_width_ratio]], dtype=np.float32) + p0_1 = quad[0] + (quad[1] - quad[0]) * ratio_pair + p3_2 = quad[3] + (quad[2] - quad[3]) * ratio_pair + return np.array([p0_1[0], p0_1[1], p3_2[1], p3_2[0]]) + + def expand_poly_along_width(self, poly, shrink_ratio_of_width=0.3): + """ + expand poly along width. 
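+        The first and last point pairs are pushed outwards along the text
+        direction, by shrink_ratio_of_width times the local text height, so
+        that the restored polygon covers the ends of the text instead of
+        stopping at the outermost center-line samples.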
+ """ + point_num = poly.shape[0] + left_quad = np.array([poly[0], poly[1], poly[-2], poly[-1]], dtype=np.float32) + left_ratio = -shrink_ratio_of_width * np.linalg.norm(left_quad[0] - left_quad[3]) / \ + (np.linalg.norm(left_quad[0] - left_quad[1]) + 1e-6) + left_quad_expand = self.shrink_quad_along_width(left_quad, left_ratio, 1.0) + right_quad = np.array([poly[point_num // 2 - 2], poly[point_num // 2 - 1], + poly[point_num // 2], poly[point_num // 2 + 1]], dtype=np.float32) + right_ratio = 1.0 + \ + shrink_ratio_of_width * np.linalg.norm(right_quad[0] - right_quad[3]) / \ + (np.linalg.norm(right_quad[0] - right_quad[1]) + 1e-6) + right_quad_expand = self.shrink_quad_along_width(right_quad, 0.0, right_ratio) + poly[0] = left_quad_expand[0] + poly[-1] = left_quad_expand[-1] + poly[point_num // 2 - 1] = right_quad_expand[1] + poly[point_num // 2] = right_quad_expand[2] + return poly + + def restore_quad(self, tcl_map, tcl_map_thresh, tvo_map): + """Restore quad.""" + xy_text = np.argwhere(tcl_map[:, :, 0] > tcl_map_thresh) + xy_text = xy_text[:, ::-1] # (n, 2) + + # Sort the text boxes via the y axis + xy_text = xy_text[np.argsort(xy_text[:, 1])] + + scores = tcl_map[xy_text[:, 1], xy_text[:, 0], 0] + scores = scores[:, np.newaxis] + + # Restore + point_num = int(tvo_map.shape[-1] / 2) + assert point_num == 4 + tvo_map = tvo_map[xy_text[:, 1], xy_text[:, 0], :] + xy_text_tile = np.tile(xy_text, (1, point_num)) # (n, point_num * 2) + quads = xy_text_tile - tvo_map + + return scores, quads, xy_text + + def quad_area(self, quad): + """ + compute area of a quad. + """ + edge = [ + (quad[1][0] - quad[0][0]) * (quad[1][1] + quad[0][1]), + (quad[2][0] - quad[1][0]) * (quad[2][1] + quad[1][1]), + (quad[3][0] - quad[2][0]) * (quad[3][1] + quad[2][1]), + (quad[0][0] - quad[3][0]) * (quad[0][1] + quad[3][1]) + ] + return np.sum(edge) / 2. + + def nms(self, dets): + if self.is_python35: + import lanms + dets = lanms.merge_quadrangle_n9(dets, self.nms_thresh) + else: + dets = nms_locality(dets, self.nms_thresh) + return dets + + def cluster_by_quads_tco(self, tcl_map, tcl_map_thresh, quads, tco_map): + """ + Cluster pixels in tcl_map based on quads. + """ + instance_count = quads.shape[0] + 1 # contain background + instance_label_map = np.zeros(tcl_map.shape[:2], dtype=np.int32) + if instance_count == 1: + return instance_count, instance_label_map + + # predict text center + xy_text = np.argwhere(tcl_map[:, :, 0] > tcl_map_thresh) + n = xy_text.shape[0] + xy_text = xy_text[:, ::-1] # (n, 2) + tco = tco_map[xy_text[:, 1], xy_text[:, 0], :] # (n, 2) + pred_tc = xy_text - tco + + # get gt text center + m = quads.shape[0] + gt_tc = np.mean(quads, axis=1) # (m, 2) + + pred_tc_tile = np.tile(pred_tc[:, np.newaxis, :], (1, m, 1)) # (n, m, 2) + gt_tc_tile = np.tile(gt_tc[np.newaxis, :, :], (n, 1, 1)) # (n, m, 2) + dist_mat = np.linalg.norm(pred_tc_tile - gt_tc_tile, axis=2) # (n, m) + xy_text_assign = np.argmin(dist_mat, axis=1) + 1 # (n,) + + instance_label_map[xy_text[:, 1], xy_text[:, 0]] = xy_text_assign + return instance_count, instance_label_map + + def estimate_sample_pts_num(self, quad, xy_text): + """ + Estimate sample points number. 
+ """ + eh = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2])) / 2.0 + ew = (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3])) / 2.0 + + dense_sample_pts_num = max(2, int(ew)) + dense_xy_center_line = xy_text[np.linspace(0, xy_text.shape[0] - 1, dense_sample_pts_num, + endpoint=True, dtype=np.float32).astype(np.int32)] + + dense_xy_center_line_diff = dense_xy_center_line[1:] - dense_xy_center_line[:-1] + estimate_arc_len = np.sum(np.linalg.norm(dense_xy_center_line_diff, axis=1)) + + sample_pts_num = max(2, int(estimate_arc_len / eh)) + return sample_pts_num + + def detect_sast(self, tcl_map, tvo_map, tbo_map, tco_map, ratio_w, ratio_h, src_w, src_h, + shrink_ratio_of_width=0.3, tcl_map_thresh=0.5, offset_expand=1.0, out_strid=4.0): + """ + first resize the tcl_map, tvo_map and tbo_map to the input_size, then restore the polys + """ + # restore quad + scores, quads, xy_text = self.restore_quad(tcl_map, tcl_map_thresh, tvo_map) + dets = np.hstack((quads, scores)).astype(np.float32, copy=False) + dets = self.nms(dets) + if dets.shape[0] == 0: + return [] + quads = dets[:, :-1].reshape(-1, 4, 2) + + # Compute quad area + quad_areas = [] + for quad in quads: + quad_areas.append(-self.quad_area(quad)) + + # instance segmentation + # instance_count, instance_label_map = cv2.connectedComponents(tcl_map.astype(np.uint8), connectivity=8) + instance_count, instance_label_map = self.cluster_by_quads_tco(tcl_map, tcl_map_thresh, quads, tco_map) + + # restore single poly with tcl instance. + poly_list = [] + for instance_idx in range(1, instance_count): + xy_text = np.argwhere(instance_label_map == instance_idx)[:, ::-1] + quad = quads[instance_idx - 1] + q_area = quad_areas[instance_idx - 1] + if q_area < 5: + continue + + # + len1 = float(np.linalg.norm(quad[0] -quad[1])) + len2 = float(np.linalg.norm(quad[1] -quad[2])) + min_len = min(len1, len2) + if min_len < 3: + continue + + # filter small CC + if xy_text.shape[0] <= 0: + continue + + # filter low confidence instance + xy_text_scores = tcl_map[xy_text[:, 1], xy_text[:, 0], 0] + if np.sum(xy_text_scores) / quad_areas[instance_idx - 1] < 0.1: + # if np.sum(xy_text_scores) / quad_areas[instance_idx - 1] < 0.05: + continue + + # sort xy_text + left_center_pt = np.array([[(quad[0, 0] + quad[-1, 0]) / 2.0, + (quad[0, 1] + quad[-1, 1]) / 2.0]]) # (1, 2) + right_center_pt = np.array([[(quad[1, 0] + quad[2, 0]) / 2.0, + (quad[1, 1] + quad[2, 1]) / 2.0]]) # (1, 2) + proj_unit_vec = (right_center_pt - left_center_pt) / \ + (np.linalg.norm(right_center_pt - left_center_pt) + 1e-6) + proj_value = np.sum(xy_text * proj_unit_vec, axis=1) + xy_text = xy_text[np.argsort(proj_value)] + + # Sample pts in tcl map + if self.sample_pts_num == 0: + sample_pts_num = self.estimate_sample_pts_num(quad, xy_text) + else: + sample_pts_num = self.sample_pts_num + xy_center_line = xy_text[np.linspace(0, xy_text.shape[0] - 1, sample_pts_num, + endpoint=True, dtype=np.float32).astype(np.int32)] + + point_pair_list = [] + for x, y in xy_center_line: + # get corresponding offset + offset = tbo_map[y, x, :].reshape(2, 2) + if offset_expand != 1.0: + offset_length = np.linalg.norm(offset, axis=1, keepdims=True) + expand_length = np.clip(offset_length * (offset_expand - 1), a_min=0.5, a_max=3.0) + offset_detal = offset / offset_length * expand_length + offset = offset + offset_detal + # original point + ori_yx = np.array([y, x], dtype=np.float32) + point_pair = (ori_yx + offset)[:, ::-1]* out_strid / np.array([ratio_w, ratio_h]).reshape(-1, 
2) + point_pair_list.append(point_pair) + + # ndarry: (x, 2), expand poly along width + detected_poly = self.point_pair2poly(point_pair_list) + detected_poly = self.expand_poly_along_width(detected_poly, shrink_ratio_of_width) + detected_poly[:, 0] = np.clip(detected_poly[:, 0], a_min=0, a_max=src_w) + detected_poly[:, 1] = np.clip(detected_poly[:, 1], a_min=0, a_max=src_h) + poly_list.append(detected_poly) + + return poly_list + + def __call__(self, outs_dict, ratio_list): + score_list = outs_dict['f_score'] + border_list = outs_dict['f_border'] + tvo_list = outs_dict['f_tvo'] + tco_list = outs_dict['f_tco'] + + img_num = len(ratio_list) + poly_lists = [] + for ino in range(img_num): + p_score = score_list[ino].transpose((1,2,0)) + p_border = border_list[ino].transpose((1,2,0)) + p_tvo = tvo_list[ino].transpose((1,2,0)) + p_tco = tco_list[ino].transpose((1,2,0)) + # print(p_score.shape, p_border.shape, p_tvo.shape, p_tco.shape) + ratio_h, ratio_w, src_h, src_w = ratio_list[ino] + + poly_list = self.detect_sast(p_score, p_tvo, p_border, p_tco, ratio_w, ratio_h, src_w, src_h, + shrink_ratio_of_width=self.shrink_ratio_of_width, + tcl_map_thresh=self.tcl_map_thresh, offset_expand=self.expand_scale) + + poly_lists.append(poly_list) + + return poly_lists + diff --git a/ppocr/utils/character.py b/ppocr/utils/character.py index 9a3db8dd92454c65256d1cadf7f155b6882ee171..c7c93fc557604a32d12343d929c119fd787ee126 100755 --- a/ppocr/utils/character.py +++ b/ppocr/utils/character.py @@ -25,6 +25,9 @@ class CharacterOps(object): def __init__(self, config): self.character_type = config['character_type'] self.loss_type = config['loss_type'] + self.max_text_len = config['max_text_length'] + if self.loss_type == "srn" and self.character_type != "en": + raise Exception("SRN can only support in character_type == en") if self.character_type == "en": self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz" dict_character = list(self.character_str) @@ -54,6 +57,8 @@ class CharacterOps(object): self.end_str = "eos" if self.loss_type == "attention": dict_character = [self.beg_str, self.end_str] + dict_character + elif self.loss_type == "srn": + dict_character = dict_character + [self.beg_str, self.end_str] self.dict = {} for i, char in enumerate(dict_character): self.dict[char] = i @@ -147,6 +152,39 @@ def cal_predicts_accuracy(char_ops, return acc, acc_num, img_num +def cal_predicts_accuracy_srn(char_ops, + preds, + labels, + max_text_len, + is_debug=False): + acc_num = 0 + img_num = 0 + + total_len = preds.shape[0] + img_num = int(total_len / max_text_len) + for i in range(img_num): + cur_label = [] + cur_pred = [] + for j in range(max_text_len): + if labels[j + i * max_text_len] != 37: #0 + cur_label.append(labels[j + i * max_text_len][0]) + else: + break + + for j in range(max_text_len + 1): + if j < len(cur_label) and preds[j + i * max_text_len][ + 0] != cur_label[j]: + break + elif j == len(cur_label) and j == max_text_len: + acc_num += 1 + break + elif j == len(cur_label) and preds[j + i * max_text_len][0] == 37: + acc_num += 1 + break + acc = acc_num * 1.0 / img_num + return acc, acc_num, img_num + + def convert_rec_attention_infer_res(preds): img_num = preds.shape[0] target_lod = [0] diff --git a/ppocr/utils/save_load.py b/ppocr/utils/save_load.py old mode 100755 new mode 100644 diff --git a/tools/eval_utils/eval_det_iou.py b/tools/eval_utils/eval_det_iou.py index 2f5ff2f10f1d5ecd1d33d2c742633945288ffe4c..64405984e3b90aa58f21324e727713075c5c4dc4 100644 --- a/tools/eval_utils/eval_det_iou.py +++ 
b/tools/eval_utils/eval_det_iou.py @@ -88,8 +88,8 @@ class DetectionIoUEvaluator(object): points = gt[n]['points'] # transcription = gt[n]['text'] dontCare = gt[n]['ignore'] - points = Polygon(points) - points = points.buffer(0) +# points = Polygon(points) +# points = points.buffer(0) if not Polygon(points).is_valid or not Polygon(points).is_simple: continue @@ -105,8 +105,8 @@ class DetectionIoUEvaluator(object): for n in range(len(pred)): points = pred[n]['points'] - points = Polygon(points) - points = points.buffer(0) +# points = Polygon(points) +# points = points.buffer(0) if not Polygon(points).is_valid or not Polygon(points).is_simple: continue diff --git a/tools/eval_utils/eval_rec_utils.py b/tools/eval_utils/eval_rec_utils.py index aebb9f903a4d84288da0d7b042ea39e1759456b5..4479d9dff2c352c54a26ac1bfdbddab497fff418 100644 --- a/tools/eval_utils/eval_rec_utils.py +++ b/tools/eval_utils/eval_rec_utils.py @@ -29,7 +29,7 @@ FORMAT = '%(asctime)s-%(levelname)s: %(message)s' logging.basicConfig(level=logging.INFO, format=FORMAT) logger = logging.getLogger(__name__) -from ppocr.utils.character import cal_predicts_accuracy +from ppocr.utils.character import cal_predicts_accuracy, cal_predicts_accuracy_srn from ppocr.utils.character import convert_rec_label_to_lod from ppocr.utils.character import convert_rec_attention_infer_res from ppocr.utils.utility import create_module @@ -60,22 +60,60 @@ def eval_rec_run(exe, config, eval_info_dict, mode): for ino in range(img_num): img_list.append(data[ino][0]) label_list.append(data[ino][1]) - img_list = np.concatenate(img_list, axis=0) - outs = exe.run(eval_info_dict['program'], \ + + if config['Global']['loss_type'] != "srn": + img_list = np.concatenate(img_list, axis=0) + outs = exe.run(eval_info_dict['program'], \ feed={'image': img_list}, \ fetch_list=eval_info_dict['fetch_varname_list'], \ return_numpy=False) - preds = np.array(outs[0]) - if preds.shape[1] != 1: - preds, preds_lod = convert_rec_attention_infer_res(preds) + preds = np.array(outs[0]) + + if config['Global']['loss_type'] == "attention": + preds, preds_lod = convert_rec_attention_infer_res(preds) + else: + preds_lod = outs[0].lod()[0] + labels, labels_lod = convert_rec_label_to_lod(label_list) + acc, acc_num, sample_num = cal_predicts_accuracy( + char_ops, preds, preds_lod, labels, labels_lod, + is_remove_duplicate) else: - preds_lod = outs[0].lod()[0] - labels, labels_lod = convert_rec_label_to_lod(label_list) - acc, acc_num, sample_num = cal_predicts_accuracy( - char_ops, preds, preds_lod, labels, labels_lod, is_remove_duplicate) + encoder_word_pos_list = [] + gsrm_word_pos_list = [] + gsrm_slf_attn_bias1_list = [] + gsrm_slf_attn_bias2_list = [] + for ino in range(img_num): + encoder_word_pos_list.append(data[ino][2]) + gsrm_word_pos_list.append(data[ino][3]) + gsrm_slf_attn_bias1_list.append(data[ino][4]) + gsrm_slf_attn_bias2_list.append(data[ino][5]) + + img_list = np.concatenate(img_list, axis=0) + label_list = np.concatenate(label_list, axis=0) + encoder_word_pos_list = np.concatenate( + encoder_word_pos_list, axis=0).astype(np.int64) + gsrm_word_pos_list = np.concatenate( + gsrm_word_pos_list, axis=0).astype(np.int64) + gsrm_slf_attn_bias1_list = np.concatenate( + gsrm_slf_attn_bias1_list, axis=0).astype(np.float32) + gsrm_slf_attn_bias2_list = np.concatenate( + gsrm_slf_attn_bias2_list, axis=0).astype(np.float32) + + labels = label_list + + outs = exe.run(eval_info_dict['program'], \ + feed={'image': img_list, 'encoder_word_pos': encoder_word_pos_list, + 'gsrm_word_pos': 
diff --git a/tools/eval_utils/eval_rec_utils.py b/tools/eval_utils/eval_rec_utils.py
index aebb9f903a4d84288da0d7b042ea39e1759456b5..4479d9dff2c352c54a26ac1bfdbddab497fff418 100644
--- a/tools/eval_utils/eval_rec_utils.py
+++ b/tools/eval_utils/eval_rec_utils.py
@@ -29,7 +29,7 @@ FORMAT = '%(asctime)s-%(levelname)s: %(message)s'
 logging.basicConfig(level=logging.INFO, format=FORMAT)
 logger = logging.getLogger(__name__)

-from ppocr.utils.character import cal_predicts_accuracy
+from ppocr.utils.character import cal_predicts_accuracy, cal_predicts_accuracy_srn
 from ppocr.utils.character import convert_rec_label_to_lod
 from ppocr.utils.character import convert_rec_attention_infer_res
 from ppocr.utils.utility import create_module
@@ -60,22 +60,60 @@ def eval_rec_run(exe, config, eval_info_dict, mode):
         for ino in range(img_num):
             img_list.append(data[ino][0])
             label_list.append(data[ino][1])
-        img_list = np.concatenate(img_list, axis=0)
-        outs = exe.run(eval_info_dict['program'], \
+
+        if config['Global']['loss_type'] != "srn":
+            img_list = np.concatenate(img_list, axis=0)
+            outs = exe.run(eval_info_dict['program'], \
                        feed={'image': img_list}, \
                        fetch_list=eval_info_dict['fetch_varname_list'], \
                        return_numpy=False)
-        preds = np.array(outs[0])
-        if preds.shape[1] != 1:
-            preds, preds_lod = convert_rec_attention_infer_res(preds)
+            preds = np.array(outs[0])
+
+            if config['Global']['loss_type'] == "attention":
+                preds, preds_lod = convert_rec_attention_infer_res(preds)
+            else:
+                preds_lod = outs[0].lod()[0]
+            labels, labels_lod = convert_rec_label_to_lod(label_list)
+            acc, acc_num, sample_num = cal_predicts_accuracy(
+                char_ops, preds, preds_lod, labels, labels_lod,
+                is_remove_duplicate)
         else:
-            preds_lod = outs[0].lod()[0]
-        labels, labels_lod = convert_rec_label_to_lod(label_list)
-        acc, acc_num, sample_num = cal_predicts_accuracy(
-            char_ops, preds, preds_lod, labels, labels_lod, is_remove_duplicate)
+            encoder_word_pos_list = []
+            gsrm_word_pos_list = []
+            gsrm_slf_attn_bias1_list = []
+            gsrm_slf_attn_bias2_list = []
+            for ino in range(img_num):
+                encoder_word_pos_list.append(data[ino][2])
+                gsrm_word_pos_list.append(data[ino][3])
+                gsrm_slf_attn_bias1_list.append(data[ino][4])
+                gsrm_slf_attn_bias2_list.append(data[ino][5])
+
+            img_list = np.concatenate(img_list, axis=0)
+            label_list = np.concatenate(label_list, axis=0)
+            encoder_word_pos_list = np.concatenate(
+                encoder_word_pos_list, axis=0).astype(np.int64)
+            gsrm_word_pos_list = np.concatenate(
+                gsrm_word_pos_list, axis=0).astype(np.int64)
+            gsrm_slf_attn_bias1_list = np.concatenate(
+                gsrm_slf_attn_bias1_list, axis=0).astype(np.float32)
+            gsrm_slf_attn_bias2_list = np.concatenate(
+                gsrm_slf_attn_bias2_list, axis=0).astype(np.float32)
+
+            labels = label_list
+
+            outs = exe.run(eval_info_dict['program'], \
+                           feed={'image': img_list, 'encoder_word_pos': encoder_word_pos_list,
+                                 'gsrm_word_pos': gsrm_word_pos_list, 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1_list,
+                                 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2_list}, \
+                           fetch_list=eval_info_dict['fetch_varname_list'], \
+                           return_numpy=False)
+            preds = np.array(outs[0])
+            acc, acc_num, sample_num = cal_predicts_accuracy_srn(
+                char_ops, preds, labels, config['Global']['max_text_length'])
+
         total_acc_num += acc_num
         total_sample_num += sample_num
-        logger.info("eval batch id: {}, acc: {}".format(total_batch_num, acc))
+        # logger.info("eval batch id: {}, acc: {}".format(total_batch_num, acc))
         total_batch_num += 1
     avg_acc = total_acc_num * 1.0 / total_sample_num
     metrics = {'avg_acc': avg_acc, "total_acc_num": total_acc_num, \
diff --git a/tools/infer/predict_rec.py b/tools/infer/predict_rec.py
index b51c49fcb90657d766bddb3c39b4339e6a342a2f..c81b4eb2560ee5ad66a85c96efe4de935a2beee1 100755
--- a/tools/infer/predict_rec.py
+++ b/tools/infer/predict_rec.py
@@ -40,7 +40,8 @@ class TextRecognizer(object):
         char_ops_params = {
             "character_type": args.rec_char_type,
             "character_dict_path": args.rec_char_dict_path,
-            "use_space_char": args.use_space_char
+            "use_space_char": args.use_space_char,
+            "max_text_length": args.max_text_length
         }
         if self.rec_algorithm != "RARE":
             char_ops_params['loss_type'] = 'ctc'
diff --git a/tools/infer/utility.py b/tools/infer/utility.py
index fc91880e2fdd39cb05d5115f5c3c76fd766b714c..b0a0ec1f3037b35d22212cb6fb3555746edc2e99 100755
--- a/tools/infer/utility.py
+++ b/tools/infer/utility.py
@@ -59,6 +59,7 @@ def parse_args():
     parser.add_argument("--rec_image_shape", type=str, default="3, 32, 320")
     parser.add_argument("--rec_char_type", type=str, default='ch')
     parser.add_argument("--rec_batch_num", type=int, default=30)
+    parser.add_argument("--max_text_length", type=int, default=25)
     parser.add_argument(
         "--rec_char_dict_path",
         type=str,
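Both the evaluation branch above and the inference branch below feed four extra SRN inputs (`encoder_word_pos`, `gsrm_word_pos`, and two self-attention biases). Their construction lives in the data reader, which this patch does not show; the sketch below is one plausible construction following the SRN paper's forward/backward GSRM masks, with all shapes and sizes being assumptions rather than the repository's actual values:

```python
import numpy as np

num_heads, max_text_length = 8, 25    # assumed hyper-parameters
feature_len = (64 // 8) * (256 // 8)  # assumed 64x256 input with 8x downsampling

# Position ids for the visual feature map and for the target character slots.
encoder_word_pos = np.arange(feature_len).reshape(1, feature_len, 1).astype(np.int64)
gsrm_word_pos = np.arange(max_text_length).reshape(1, max_text_length, 1).astype(np.int64)

# Triangular attention biases: bias1 masks future tokens (forward GSRM),
# bias2 masks past tokens (backward GSRM); -1e9 zeroes them out after softmax.
ones = np.ones((max_text_length, max_text_length))
gsrm_slf_attn_bias1 = np.tile(np.triu(ones, k=1)[None, None] * -1e9,
                              (1, num_heads, 1, 1)).astype(np.float32)
gsrm_slf_attn_bias2 = np.tile(np.tril(ones, k=-1)[None, None] * -1e9,
                              (1, num_heads, 1, 1)).astype(np.float32)
```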
diff --git a/tools/infer_rec.py b/tools/infer_rec.py
index 8cde44d8ef91355c48f9081cf706e340ddb04820..21b503cc7f342885094a03ef1f1ed0f05698ac70 100755
--- a/tools/infer_rec.py
+++ b/tools/infer_rec.py
@@ -64,7 +64,6 @@ def main():
     exe = fluid.Executor(place)

     rec_model = create_module(config['Architecture']['function'])(params=config)
-
     startup_prog = fluid.Program()
     eval_prog = fluid.Program()
     with fluid.program_guard(eval_prog, startup_prog):
@@ -86,10 +85,36 @@ def main():
     for i in range(max_img_num):
         logger.info("infer_img:%s" % infer_list[i])
         img = next(blobs)
-        predict = exe.run(program=eval_prog,
-                          feed={"image": img},
-                          fetch_list=fetch_varname_list,
-                          return_numpy=False)
+        if loss_type != "srn":
+            predict = exe.run(program=eval_prog,
+                              feed={"image": img},
+                              fetch_list=fetch_varname_list,
+                              return_numpy=False)
+        else:
+            encoder_word_pos_list = []
+            gsrm_word_pos_list = []
+            gsrm_slf_attn_bias1_list = []
+            gsrm_slf_attn_bias2_list = []
+            encoder_word_pos_list.append(img[1])
+            gsrm_word_pos_list.append(img[2])
+            gsrm_slf_attn_bias1_list.append(img[3])
+            gsrm_slf_attn_bias2_list.append(img[4])
+
+            encoder_word_pos_list = np.concatenate(
+                encoder_word_pos_list, axis=0).astype(np.int64)
+            gsrm_word_pos_list = np.concatenate(
+                gsrm_word_pos_list, axis=0).astype(np.int64)
+            gsrm_slf_attn_bias1_list = np.concatenate(
+                gsrm_slf_attn_bias1_list, axis=0).astype(np.float32)
+            gsrm_slf_attn_bias2_list = np.concatenate(
+                gsrm_slf_attn_bias2_list, axis=0).astype(np.float32)
+
+            predict = exe.run(program=eval_prog, \
+                              feed={'image': img[0], 'encoder_word_pos': encoder_word_pos_list,
+                                    'gsrm_word_pos': gsrm_word_pos_list, 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1_list,
+                                    'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2_list}, \
+                              fetch_list=fetch_varname_list, \
+                              return_numpy=False)
         if loss_type == "ctc":
             preds = np.array(predict[0])
             preds = preds.reshape(-1)
@@ -114,7 +139,18 @@ def main():
             score = np.mean(probs[0, 1:end_pos[1]])
             preds = preds.reshape(-1)
             preds_text = char_ops.decode(preds)
-
+        elif loss_type == "srn":
+            preds = np.array(predict[0])
+            preds = preds.reshape(-1)
+            probs = np.array(predict[1])
+            ind = np.argmax(probs, axis=1)
+            valid_ind = np.where(preds != 37)[0]  # 37 is the 'eos' index
+            if len(valid_ind) == 0:
+                continue
+            score = np.mean(probs[valid_ind, ind[valid_ind]])
+            preds = preds[:valid_ind[-1] + 1]
+            preds_text = char_ops.decode(preds)
         logger.info("\t index: {}".format(preds))
         logger.info("\t word : {}".format(preds_text))
         logger.info("\t score: {}".format(score))
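The SRN decode step above strips the `eos` padding (index 37) and averages the per-character probabilities as a confidence score. A self-contained sketch of the same idea, using dummy probabilities in place of the real `predict[1]` output (in the real code the column index comes from `argmax`, which here coincides with `preds` by construction):

```python
import numpy as np

chars = "0123456789abcdefghijklmnopqrstuvwxyz"
eos_idx, max_text_len = 37, 25

preds = np.array([17, 14, 21, 21, 24] + [eos_idx] * 20)  # "hello", eos-padded
probs = np.full((max_text_len, 38), 0.01)                # dummy softmax output
probs[np.arange(max_text_len), preds] = 0.9

valid_ind = np.where(preds != eos_idx)[0]   # positions before the padding
score = np.mean(probs[valid_ind, preds[valid_ind]])
text = "".join(chars[i] for i in preds[:valid_ind[-1] + 1])
print(text, score)                          # hello 0.9
```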
diff --git a/tools/program.py b/tools/program.py
index 4ebc11670702ba627c89b060692c9827e6e163fd..6d8b9937bff7e70f018b069467525669e7001aae 100755
--- a/tools/program.py
+++ b/tools/program.py
@@ -32,7 +32,8 @@ from eval_utils.eval_det_utils import eval_det_run
 from eval_utils.eval_rec_utils import eval_rec_run
 from ppocr.utils.save_load import save_model
 import numpy as np
-from ppocr.utils.character import cal_predicts_accuracy, CharacterOps
+from ppocr.utils.character import cal_predicts_accuracy, cal_predicts_accuracy_srn, CharacterOps
+

 class ArgsParser(ArgumentParser):
     def __init__(self):
@@ -81,10 +82,8 @@ default_config = {'Global': {'debug': False, }}
 def load_config(file_path):
     """
     Load config from yml/yaml file.
-
     Args:
         file_path (str): Path of the config file to be loaded.
-
     Returns: global config
     """
     merge_config(default_config)
@@ -103,10 +102,8 @@ def load_config(file_path):
 def merge_config(config):
     """
     Merge config into global config.
-
     Args:
         config (dict): Config to be merged.
-
     Returns: global config
     """
     for key, value in config.items():
@@ -157,13 +154,11 @@ def build(config, main_prog, startup_prog, mode):
     3. create a model
     4. create fetchs
     5. create an optimizer
-
     Args:
         config(dict): config
         main_prog(): main program
         startup_prog(): startup program
         is_train(bool): train or valid
-
     Returns:
         dataloader(): a bridge between the model and the data
         fetchs(dict): dict of model outputs (including loss and measures)
@@ -176,8 +171,16 @@ def build(config, main_prog, startup_prog, mode):
     fetch_name_list = list(outputs.keys())
     fetch_varname_list = [outputs[v].name for v in fetch_name_list]
     opt_loss_name = None
+    model_average = None
+    img_loss_name = None
+    word_loss_name = None
     if mode == "train":
         opt_loss = outputs['total_loss']
+        # SRN loss terms (img_loss, word_loss); fetching them is disabled for now:
+        # img_loss = outputs['img_loss']
+        # word_loss = outputs['word_loss']
+        # img_loss_name = img_loss.name
+        # word_loss_name = word_loss.name
         opt_params = config['Optimizer']
         optimizer = create_module(opt_params['function'])(opt_params)
         optimizer.minimize(opt_loss)
@@ -185,7 +188,17 @@ def build(config, main_prog, startup_prog, mode):
         global_lr = optimizer._global_learning_rate()
         fetch_name_list.insert(0, "lr")
         fetch_varname_list.insert(0, global_lr.name)
-    return (dataloader, fetch_name_list, fetch_varname_list, opt_loss_name)
+        if config["Global"].get("loss_type") == "srn":
+            model_average = fluid.optimizer.ModelAverage(
+                config['Global']['average_window'],
+                min_average_window=config['Global']['min_average_window'],
+                max_average_window=config['Global']['max_average_window'])
+
+    return (dataloader, fetch_name_list, fetch_varname_list, opt_loss_name,
+            model_average)


 def build_export(config, main_prog, startup_prog):
@@ -329,14 +342,20 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
             lr = np.mean(np.array(train_outs[fetch_map['lr']]))
             preds_idx = fetch_map['decoded_out']
             preds = np.array(train_outs[preds_idx])
-            preds_lod = train_outs[preds_idx].lod()[0]
             labels_idx = fetch_map['label']
             labels = np.array(train_outs[labels_idx])
-            labels_lod = train_outs[labels_idx].lod()[0]
-            acc, acc_num, img_num = cal_predicts_accuracy(
-                config['Global']['char_ops'], preds, preds_lod, labels,
-                labels_lod)
+            if config['Global']['loss_type'] != 'srn':
+                preds_lod = train_outs[preds_idx].lod()[0]
+                labels_lod = train_outs[labels_idx].lod()[0]
+
+                acc, acc_num, img_num = cal_predicts_accuracy(
+                    config['Global']['char_ops'], preds, preds_lod, labels,
+                    labels_lod)
+            else:
+                acc, acc_num, img_num = cal_predicts_accuracy_srn(
+                    config['Global']['char_ops'], preds, labels,
+                    config['Global']['max_text_length'])
             t2 = time.time()
             train_batch_elapse = t2 - t1
             stats = {'loss': loss, 'acc': acc}
@@ -350,6 +369,9 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
             if train_batch_id > 0 and\
                 train_batch_id % eval_batch_step == 0:
+                model_average = train_info_dict['model_average']
+                if model_average is not None:
+                    model_average.apply(exe)
                 metrics = eval_rec_run(exe, config, eval_info_dict, "eval")
                 eval_acc = metrics['avg_acc']
                 eval_sample_num = metrics['total_sample_num']
@@ -375,6 +397,7 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
                 save_model(train_info_dict['train_program'], save_path)
     return

+
 def preprocess():
     FLAGS = ArgsParser().parse_args()
     config = load_config(FLAGS.config)
@@ -386,15 +409,15 @@ def preprocess():
     check_gpu(use_gpu)

     alg = config['Global']['algorithm']
-    assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']
-    if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE']:
+    assert alg in ['EAST', 'DB', 'SAST', 'Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN']
+    if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN']:
         config['Global']['char_ops'] = CharacterOps(config['Global'])

     place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
     startup_program = fluid.Program()
     train_program = fluid.Program()
-    if alg in ['EAST', 'DB']:
+    if alg in ['EAST', 'DB', 'SAST']:
         train_alg_type = 'det'
     else:
         train_alg_type = 'rec'
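`program.py` now builds a `fluid.optimizer.ModelAverage` when `loss_type == 'srn'` and applies it before each evaluation. A minimal, self-contained sketch of that pattern under Paddle 1.x; the window values are assumptions (the SRN config supplies the real ones), and Paddle documents `apply()` as a context manager that restores the current weights on exit:

```python
import numpy as np
import paddle.fluid as fluid

x = fluid.data(name='x', shape=[None, 1], dtype='float32')
y = fluid.data(name='y', shape=[None, 1], dtype='float32')
loss = fluid.layers.mean(
    fluid.layers.square_error_cost(fluid.layers.fc(input=x, size=1), y))
fluid.optimizer.SGD(learning_rate=0.01).minimize(loss)
model_average = fluid.optimizer.ModelAverage(
    0.15, min_average_window=2, max_average_window=10)  # assumed windows

exe = fluid.Executor(fluid.CPUPlace())
exe.run(fluid.default_startup_program())
for _ in range(20):  # a few training steps so there is something to average
    exe.run(feed={'x': np.random.rand(4, 1).astype('float32'),
                  'y': np.random.rand(4, 1).astype('float32')},
            fetch_list=[loss.name])

with model_average.apply(exe):  # evaluate with averaged weights
    pass                        # run the eval program here
# current (non-averaged) weights are restored on exit
```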
diff --git a/tools/train.py b/tools/train.py
index 68e792b7331a9d47ca6744ea1a9f362979d75542..2ea9d0e011d38f228106409f843c6ed41f10b844 100755
--- a/tools/train.py
+++ b/tools/train.py
@@ -52,6 +52,7 @@ def main():
     train_fetch_name_list = train_build_outputs[1]
     train_fetch_varname_list = train_build_outputs[2]
     train_opt_loss_name = train_build_outputs[3]
+    model_average = train_build_outputs[-1]

     eval_program = fluid.Program()
     eval_build_outputs = program.build(
@@ -85,7 +86,8 @@ def main():
                        'train_program':train_program,\
                        'reader':train_loader,\
                        'fetch_name_list':train_fetch_name_list,\
-                       'fetch_varname_list':train_fetch_varname_list}
+                       'fetch_varname_list':train_fetch_varname_list,\
+                       'model_average': model_average}
     eval_info_dict = {'program':eval_program,\
                       'reader':eval_reader,\
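Finally, `preprocess()` routes each algorithm to a detection or recognition trainer; the new entries slot SAST into the detection branch and SRN into the recognition branch. A tiny standalone restatement of that routing:

```python
DET_ALGS = ['EAST', 'DB', 'SAST']
REC_ALGS = ['Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN']

def train_alg_type(alg):
    assert alg in DET_ALGS + REC_ALGS
    return 'det' if alg in DET_ALGS else 'rec'

print(train_alg_type('SAST'))  # det
print(train_alg_type('SRN'))   # rec
```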