diff --git a/README.md b/README.md
index 4814fbb9e857c9bcbb1513c5622500454477b942..08a27d8e7e2dd0a1ddcc774b0dd19189fcfb248b 100644
--- a/README.md
+++ b/README.md
@@ -4,12 +4,11 @@ English | [简体中文](README_cn.md)
PaddleOCR aims to create rich, leading, and practical OCR tools that help users train better models and apply them into practice.
**Recent updates**
+- 2020.8.16, Release the text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and the text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23, Release the playback and PPT of live class on BiliBili station, PaddleOCR Introduction, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15, Add mobile App demo , support both iOS and Android ( based on easyedge and Paddle Lite)
-- 2020.7.15, Improve the deployment ability, add the C + + inference , serving deployment. In addtion, the benchmarks of the ultra-lightweight OCR model are provided.
+- 2020.7.15, Improve the deployment ability, add C++ inference and serving deployment. In addition, the benchmarks of the ultra-lightweight OCR model are provided.
- 2020.7.15, Add several related datasets, data annotation and synthesis tools.
-- 2020.7.9 Add a new model to support recognize the character "space".
-- 2020.7.9 Add the data augument and learning rate decay strategies during training.
- [more](./doc/doc_en/update_en.md)
## Features
@@ -18,7 +17,7 @@ PaddleOCR aims to create rich, leading, and practical OCR tools that help users
- Detection model DB (4.1M) + recognition model CRNN (4.5M)
- Various text detection algorithms: EAST, DB
- Various text recognition algorithms: Rosetta, CRNN, STAR-Net, RARE
-- Support Linux, Windows, MacOS and other systems.
+- Support Linux, Windows, macOS and other systems.
## Visualization
@@ -30,9 +29,9 @@ PaddleOCR aims to create rich, leading, and practical OCR tools that help users
You can also quickly experience the ultra-lightweight OCR : [Online Experience](https://www.paddlepaddle.org.cn/hub/scene/ocr)
-Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): [Sign in the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)
+Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Android systems): [Sign in to the website to obtain the QR code for installing the App](https://ai.baidu.com/easyedge/app/openSource?from=paddlelite)
- Also, you can scan the QR code blow to install the App (**Android support only**)
+ Also, you can scan the QR code below to install the App (**Android support only**)
@@ -79,7 +78,7 @@ Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Andr
- Visualization
- [Ultra-lightweight Chinese/English OCR Visualization](#UCOCRVIS)
- [General Chinese/English OCR Visualization](#GeOCRVIS)
- - [Chinese/English OCR Visualization (Support Space Recognization )](#SpaceOCRVIS)
+ - [Chinese/English OCR Visualization (Support Space Recognition )](#SpaceOCRVIS)
- [Community](#Community)
- [References](./doc/doc_en/reference_en.md)
- [License](#LICENSE)
@@ -91,7 +90,7 @@ Mobile DEMO experience (based on EasyEdge and Paddle-Lite, supports iOS and Andr
PaddleOCR open source text detection algorithms list:
- [x] EAST([paper](https://arxiv.org/abs/1704.03155))
- [x] DB([paper](https://arxiv.org/abs/1911.08947))
-- [ ] SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research, comming soon)
+- [x] SAST([paper](https://arxiv.org/abs/1908.05498))(Baidu Self-Research)
On the ICDAR2015 dataset, the text detection result is as follows:
@@ -101,8 +100,17 @@ On the ICDAR2015 dataset, the text detection result is as follows:
|EAST|MobileNetV3|81.67%|79.83%|80.74%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_east.tar)|
|DB|ResNet50_vd|83.79%|80.65%|82.19%|[Download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)|
|DB|MobileNetV3|75.92%|73.18%|74.53%|[Download link](https://paddleocr.bj.bcebos.com/det_mv3_db.tar)|
+|SAST|ResNet50_vd|92.18%|82.96%|87.33%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)|
-For use of [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street view dataset with a total of 3w training data,the related configuration and pre-trained models for text detection task are as follows:
+On the Total-Text dataset, the text detection result is as follows:
+
+|Model|Backbone|precision|recall|Hmean|Download link|
+|-|-|-|-|-|-|
+|SAST|ResNet50_vd|88.74%|79.80%|84.03%|[Download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)|
+
+**Note:** Additional public datasets, such as ICDAR2013, ICDAR2017, COCO-Text, and ArT, were added to the training of the SAST model. The English public datasets, organized in the format used by PaddleOCR, can be downloaded from [Baidu Drive](https://pan.baidu.com/s/12cPnZcVuV1zn5DOd4mqjVw) (download code: 2bpi).
+
+For the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) street view dataset, with a total of 30k training images, the related configuration and pre-trained models for the text detection task are as follows:
|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|det_mv3_db.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar)|
@@ -120,7 +128,7 @@ PaddleOCR open-source text recognition algorithms list:
- [x] Rosetta([paper](https://arxiv.org/abs/1910.05085))
- [x] STAR-Net([paper](http://www.bmva.org/bmvc/2016/papers/paper043/index.html))
- [x] RARE([paper](https://arxiv.org/abs/1603.03915v1))
-- [ ] SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research, comming soon)
+- [x] SRN([paper](https://arxiv.org/abs/2003.12294))(Baidu Self-Research)
Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation result of these above text recognition (using MJSynth and SynthText for training, evaluate on IIIT, SVT, IC03, IC13, IC15, SVTP, CUTE) is as follow:
@@ -134,8 +142,14 @@ Refer to [DTRB](https://arxiv.org/abs/1904.01906), the training and evaluation r
|STAR-Net|MobileNetV3|81.56%|rec_mv3_tps_bilstm_ctc|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_ctc.tar)|
|RARE|Resnet34_vd|84.90%|rec_r34_vd_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_r34_vd_tps_bilstm_attn.tar)|
|RARE|MobileNetV3|83.32%|rec_mv3_tps_bilstm_attn|[Download link](https://paddleocr.bj.bcebos.com/rec_mv3_tps_bilstm_attn.tar)|
+|SRN|Resnet50_vd_fpn|88.33%|rec_r50fpn_vd_none_srn|[Download link](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar)|
+
+**Note:** The SRN model uses a data expansion method on the two training sets mentioned above; the expanded data can be downloaded from [Baidu Drive](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA) (download code: y3ry).
+
+The average accuracy of the two-stage training in the original paper is 89.74%, and that of the one-stage training in PaddleOCR is 88.33%. Both pre-trained weights can be downloaded [here](https://paddleocr.bj.bcebos.com/SRN/rec_r50fpn_vd_none_srn.tar).
+
+We use the [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset and crop 300k training images from the original photos using the position ground truth, with some calibration applied where needed. In addition, 5 million synthetic images are generated from the LSVT corpus to train the model. The related configuration and pre-trained models are as follows:
-We use [LSVT](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/doc/doc_en/datasets_en.md#1-icdar2019-lsvt) dataset and cropout 30w traning data from original photos by using position groundtruth and make some calibration needed. In addition, based on the LSVT corpus, 500w synthetic data is generated to train the model. The related configuration and pre-trained models are as follows:
|Model|Backbone|Configuration file|Pre-trained model|
|-|-|-|-|
|ultra-lightweight OCR model|MobileNetV3|rec_chinese_lite_train.yml|[Download link](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn.tar)|[inference model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance_infer.tar) & [pre-trained model](https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_enhance.tar)|
@@ -145,7 +159,7 @@ Please refer to the document for training guide and use of PaddleOCR text recogn
## END-TO-END OCR Algorithm
-- [ ] [End2End-PSL](https://arxiv.org/abs/1909.07808)(Baidu Self-Research, comming soon)
+- [ ] [End2End-PSL](https://arxiv.org/abs/1909.07808)(Baidu Self-Research, coming soon)
## Visualization
@@ -207,9 +221,10 @@ This project is released under
+
-
diff --git a/deploy/android_demo/app/src/main/cpp/native.cpp b/deploy/android_demo/app/src/main/cpp/native.cpp
index 33233e5372e307a892786c6bea779691e1f6781a..390c594deb02a8f82693f2c83741a4750fe7cb25 100644
--- a/deploy/android_demo/app/src/main/cpp/native.cpp
+++ b/deploy/android_demo/app/src/main/cpp/native.cpp
@@ -30,7 +30,7 @@ Java_com_baidu_paddle_lite_demo_ocr_OCRPredictorNative_init(JNIEnv *env, jobject
}
/**
- * "LITE_POWER_HIGH" 转为 paddle::lite_api::LITE_POWER_HIGH
+ * "LITE_POWER_HIGH" convert to paddle::lite_api::LITE_POWER_HIGH
* @param cpu_mode
* @return
*/
diff --git a/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.cpp b/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.cpp
index 6548157b7ecac09ca3802ee6e226d555bfcd9099..3d0147715519c195fd48f7f84b7a28a5a82f5363 100644
--- a/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.cpp
+++ b/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.cpp
@@ -37,7 +37,7 @@ int OCR_PPredictor::init_from_file(const std::string &det_model_path, const std:
return RETURN_OK;
}
/**
- * 调试用,保存第一步的框选结果
+ * For debugging: save the detected boxes from the first step
* @param filter_boxes
* @param boxes
* @param srcimg
diff --git a/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.h b/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.h
index 9adbf1e35214d4c83230ecf89b650aa1c1125a8f..eb2bc3bc989c5dd9a2c5a8aae3508ca733602bd7 100644
--- a/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.h
+++ b/deploy/android_demo/app/src/main/cpp/ocr_ppredictor.h
@@ -12,26 +12,26 @@
namespace ppredictor {
/**
- * 配置
+ * Config
*/
struct OCR_Config {
- int thread_num = 4; // 线程数
+ int thread_num = 4; // Number of threads
paddle::lite_api::PowerMode mode = paddle::lite_api::LITE_POWER_HIGH; // PaddleLite Mode
};
/**
- * 一个四边形内图片的推理结果,
+ * Recognition result of the image inside one detected quadrilateral
*/
struct OCRPredictResult {
- std::vector<int> word_index; //
+ std::vector<int> word_index;
std::vector<std::vector<int>> points;
float score;
};
/**
- * OCR 一共有2个模型进行推理,
- * 1. 使用第一个模型(det),框选出多个四边形
- * 2. 从原图从抠出这些多边形,使用第二个模型(rec),获取文本
+ * OCR inference runs two models:
+ * 1. The first model (det) detects quadrilateral boxes around the text regions
+ * 2. These regions are cropped from the original image and fed to the second model (rec) to recognize the text
*/
class OCR_PPredictor : public PPredictor_Interface {
public:
@@ -50,7 +50,7 @@ public:
int init(const std::string &det_model_content, const std::string &rec_model_content);
int init_from_file(const std::string &det_model_path, const std::string &rec_model_path);
/**
- * 返回OCR结果
+ * Return OCR result
* @param dims
* @param input_data
* @param input_len
@@ -69,7 +69,7 @@ public:
private:
/**
- * 从第一个模型的结果中计算有文字的四边形
+ * Compute the text quadrilaterals from the output of the first model
* @param pred
* @param output_height
* @param output_width
@@ -81,7 +81,7 @@ private:
const cv::Mat &origin);
/**
- * 第二个模型的推理
+ * Run inference with the second model
*
* @param boxes
* @param origin
@@ -91,14 +91,14 @@ private:
infer_rec(const std::vector<std::vector<std::vector<int>>> &boxes, const cv::Mat &origin);
/**
- * 第二个模型提取文字的后处理
+ * Post-process the second model's output to extract the text
* @param res
* @return
*/
std::vector postprocess_rec_word_index(const PredictorOutput &res);
/**
- * 计算第二个模型的文字的置信度
+ * Calculate the confidence of the second model's text result
* @param res
* @return
*/
diff --git a/deploy/android_demo/app/src/main/cpp/ppredictor.h b/deploy/android_demo/app/src/main/cpp/ppredictor.h
index 9cdf3a88170ca2fbae9a2b1d8353fc99ebdfb971..1391109f9197b5e53796c940857c9d01b30a1125 100644
--- a/deploy/android_demo/app/src/main/cpp/ppredictor.h
+++ b/deploy/android_demo/app/src/main/cpp/ppredictor.h
@@ -7,7 +7,7 @@
namespace ppredictor {
/**
- * PaddleLite Preditor 通用接口
+ * Common interface of the PaddleLite Predictor
*/
class PPredictor_Interface {
public:
@@ -21,7 +21,7 @@ public:
};
/**
- * 通用推理
+ * Common Predictor
*/
class PPredictor : public PPredictor_Interface {
public:
@@ -33,9 +33,9 @@ public:
}
/**
- * 初始化paddlitelite的opt模型,nb格式,与init_paddle二选一
+ * Initialize from a Paddle-Lite optimized (opt) model in nb format; use either this or init_paddle
* @param model_content
- * @return 0 目前是固定值0, 之后其他值表示失败
+ * @return 0 (currently always 0; other values will indicate failure in the future)
*/
virtual int init_nb(const std::string &model_content);
diff --git a/deploy/android_demo/app/src/main/cpp/predictor_output.h b/deploy/android_demo/app/src/main/cpp/predictor_output.h
index c56e2d9a4e9890faae89d6d183b81773a9c9a228..ec7086c62f0d5ca555ec17b38b27b6eea824fdb5 100644
--- a/deploy/android_demo/app/src/main/cpp/predictor_output.h
+++ b/deploy/android_demo/app/src/main/cpp/predictor_output.h
@@ -21,10 +21,10 @@ public:
const std::vector<std::vector<uint64_t>> get_lod() const;
const std::vector<int64_t> get_shape() const;
- std::vector<float> data; // 通常是float返回,与下面的data_int二选一
- std::vector<int> data_int; // 少数层是int返回,与 data二选一
- std::vector<int64_t> shape; // PaddleLite输出层的shape
- std::vector<std::vector<uint64_t>> lod; // PaddleLite输出层的lod
+ std::vector<float> data; // usually the output is float; use either this or data_int
+ std::vector<int> data_int; // a few layers output int; use either this or data
+ std::vector<int64_t> shape; // shape of the PaddleLite output tensor
+ std::vector<std::vector<uint64_t>> lod; // lod of the PaddleLite output tensor
private:
std::unique_ptr _tensor;
diff --git a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/AppCompatPreferenceActivity.java b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/AppCompatPreferenceActivity.java
index 397e4e39fe35a541fd534634ac509b94dd4b2b86..49af0afea425561d65d435a8fe67e96e98912680 100644
--- a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/AppCompatPreferenceActivity.java
+++ b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/AppCompatPreferenceActivity.java
@@ -19,15 +19,16 @@ package com.baidu.paddle.lite.demo.ocr;
import android.content.res.Configuration;
import android.os.Bundle;
import android.preference.PreferenceActivity;
-import android.support.annotation.LayoutRes;
-import android.support.annotation.Nullable;
-import android.support.v7.app.ActionBar;
-import android.support.v7.app.AppCompatDelegate;
-import android.support.v7.widget.Toolbar;
import android.view.MenuInflater;
import android.view.View;
import android.view.ViewGroup;
+import androidx.annotation.LayoutRes;
+import androidx.annotation.Nullable;
+import androidx.appcompat.app.ActionBar;
+import androidx.appcompat.app.AppCompatDelegate;
+import androidx.appcompat.widget.Toolbar;
+
/**
* A {@link PreferenceActivity} which implements and proxies the necessary calls
* to be used with AppCompat.
diff --git a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MainActivity.java b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MainActivity.java
index 716e7bddbc63d7787d79913f0ba8faf795a4bfe6..afb261dcf2afb2a2eaebf58c8c1f30f89200c902 100644
--- a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MainActivity.java
+++ b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MainActivity.java
@@ -19,11 +19,6 @@ import android.os.HandlerThread;
import android.os.Message;
import android.preference.PreferenceManager;
import android.provider.MediaStore;
-import android.support.annotation.NonNull;
-import android.support.v4.app.ActivityCompat;
-import android.support.v4.content.ContextCompat;
-import android.support.v4.content.FileProvider;
-import android.support.v7.app.AppCompatActivity;
import android.text.method.ScrollingMovementMethod;
import android.util.Log;
import android.view.Menu;
@@ -33,6 +28,12 @@ import android.widget.ImageView;
import android.widget.TextView;
import android.widget.Toast;
+import androidx.annotation.NonNull;
+import androidx.appcompat.app.AppCompatActivity;
+import androidx.core.app.ActivityCompat;
+import androidx.core.content.ContextCompat;
+import androidx.core.content.FileProvider;
+
import java.io.File;
import java.io.IOException;
import java.io.InputStream;
diff --git a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MiniActivity.java b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MiniActivity.java
new file mode 100644
index 0000000000000000000000000000000000000000..d5608911db2e043657eb01b9e8e92fe9b79c99b7
--- /dev/null
+++ b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/MiniActivity.java
@@ -0,0 +1,157 @@
+package com.baidu.paddle.lite.demo.ocr;
+
+import android.graphics.Bitmap;
+import android.graphics.BitmapFactory;
+import android.os.Build;
+import android.os.Bundle;
+import android.os.Handler;
+import android.os.HandlerThread;
+import android.os.Message;
+import android.util.Log;
+import android.view.View;
+import android.widget.Button;
+import android.widget.ImageView;
+import android.widget.TextView;
+import android.widget.Toast;
+
+import androidx.appcompat.app.AppCompatActivity;
+
+import java.io.IOException;
+import java.io.InputStream;
+
+public class MiniActivity extends AppCompatActivity {
+
+
+ public static final int REQUEST_LOAD_MODEL = 0;
+ public static final int REQUEST_RUN_MODEL = 1;
+ public static final int REQUEST_UNLOAD_MODEL = 2;
+ public static final int RESPONSE_LOAD_MODEL_SUCCESSED = 0;
+ public static final int RESPONSE_LOAD_MODEL_FAILED = 1;
+ public static final int RESPONSE_RUN_MODEL_SUCCESSED = 2;
+ public static final int RESPONSE_RUN_MODEL_FAILED = 3;
+
+ private static final String TAG = "MiniActivity";
+
+ protected Handler receiver = null; // Receive messages from worker thread
+ protected Handler sender = null; // Send command to worker thread
+ protected HandlerThread worker = null; // Worker thread to load&run model
+ protected volatile Predictor predictor = null;
+
+ private String assetModelDirPath = "models/ocr_v1_for_cpu";
+ private String assetlabelFilePath = "labels/ppocr_keys_v1.txt";
+
+ private Button button;
+ private ImageView imageView; // image result
+ private TextView textView; // text result
+
+ @Override
+ protected void onCreate(Bundle savedInstanceState) {
+ super.onCreate(savedInstanceState);
+ setContentView(R.layout.activity_mini);
+
+ Log.i(TAG, "SHOW in Logcat");
+
+ // Prepare the worker thread for model loading and inference
+ worker = new HandlerThread("Predictor Worker");
+ worker.start();
+ sender = new Handler(worker.getLooper()) {
+ public void handleMessage(Message msg) {
+ switch (msg.what) {
+ case REQUEST_LOAD_MODEL:
+ // Load model and reload test image
+ if (!onLoadModel()) {
+ runOnUiThread(new Runnable() {
+ @Override
+ public void run() {
+ Toast.makeText(MiniActivity.this, "Load model failed!", Toast.LENGTH_SHORT).show();
+ }
+ });
+ }
+ break;
+ case REQUEST_RUN_MODEL:
+ // Run model if model is loaded
+ final boolean isSuccessed = onRunModel();
+ runOnUiThread(new Runnable() {
+ @Override
+ public void run() {
+ if (isSuccessed){
+ onRunModelSuccessed();
+ }else{
+ Toast.makeText(MiniActivity.this, "Run model failed!", Toast.LENGTH_SHORT).show();
+ }
+ }
+ });
+ break;
+ }
+ }
+ };
+ sender.sendEmptyMessage(REQUEST_LOAD_MODEL); // ask the worker thread to run onLoadModel()
+
+ imageView = findViewById(R.id.imageView);
+ textView = findViewById(R.id.sample_text);
+ button = findViewById(R.id.button);
+ button.setOnClickListener(new View.OnClickListener() {
+ @Override
+ public void onClick(View v) {
+ sender.sendEmptyMessage(REQUEST_RUN_MODEL);
+ }
+ });
+
+
+ }
+
+ @Override
+ protected void onDestroy() {
+ onUnloadModel();
+ if (Build.VERSION.SDK_INT >= Build.VERSION_CODES.JELLY_BEAN_MR2) {
+ worker.quitSafely();
+ } else {
+ worker.quit();
+ }
+ super.onDestroy();
+ }
+
+ /**
+ * Model initialization; triggered from onCreate via REQUEST_LOAD_MODEL.
+ *
+ * @return true if the model and label file were loaded successfully
+ */
+ private boolean onLoadModel() {
+ if (predictor == null) {
+ predictor = new Predictor();
+ }
+ return predictor.init(this, assetModelDirPath, assetlabelFilePath);
+ }
+
+ /**
+ * Runs inference on the bundled test image; triggered by the button via REQUEST_RUN_MODEL.
+ *
+ * @return true if the model is loaded and inference succeeded
+ */
+ private boolean onRunModel() {
+ try {
+ String assetImagePath = "images/5.jpg";
+ InputStream imageStream = getAssets().open(assetImagePath);
+ Bitmap image = BitmapFactory.decodeStream(imageStream);
+ // Input is Bitmap
+ predictor.setInputImage(image);
+ return predictor.isLoaded() && predictor.runModel();
+ } catch (IOException e) {
+ e.printStackTrace();
+ return false;
+ }
+ }
+
+ private void onRunModelSuccessed() {
+ Log.i(TAG, "onRunModelSuccessed");
+ textView.setText(predictor.outputResult);
+ imageView.setImageBitmap(predictor.outputImage);
+ }
+
+ private void onUnloadModel() {
+ if (predictor != null) {
+ predictor.releaseModel();
+ }
+ }
+}
diff --git a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/Predictor.java b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/Predictor.java
index 7543aceea0331138d1b6c7ab7b33e6d0b4a223ca..078bba286cc9cd5f9904e0594b5608c755a2b131 100644
--- a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/Predictor.java
+++ b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/Predictor.java
@@ -38,7 +38,7 @@ public class Predictor {
protected float scoreThreshold = 0.1f;
protected Bitmap inputImage = null;
protected Bitmap outputImage = null;
- protected String outputResult = "";
+ protected volatile String outputResult = "";
protected float preprocessTime = 0;
protected float postprocessTime = 0;
@@ -46,6 +46,16 @@ public class Predictor {
public Predictor() {
}
+ public boolean init(Context appCtx, String modelPath, String labelPath) {
+ isLoaded = loadModel(appCtx, modelPath, cpuThreadNum, cpuPowerMode);
+ if (!isLoaded) {
+ return false;
+ }
+ isLoaded = loadLabel(appCtx, labelPath);
+ return isLoaded;
+ }
+
+
public boolean init(Context appCtx, String modelPath, String labelPath, int cpuThreadNum, String cpuPowerMode,
String inputColorFormat,
long[] inputShape, float[] inputMean,
@@ -76,11 +86,7 @@ public class Predictor {
Log.e(TAG, "Only BGR color format is supported.");
return false;
}
- isLoaded = loadModel(appCtx, modelPath, cpuThreadNum, cpuPowerMode);
- if (!isLoaded) {
- return false;
- }
- isLoaded = loadLabel(appCtx, labelPath);
+ boolean isLoaded = init(appCtx, modelPath, labelPath);
if (!isLoaded) {
return false;
}
@@ -222,7 +228,7 @@ public class Predictor {
for (int i = 0; i < warmupIterNum; i++) {
paddlePredictor.runImage(inputData, width, height, channels, inputImage);
}
- warmupIterNum = 0; // 之后不要再warm了
+ warmupIterNum = 0; // no need to warm up again on later runs
// Run inference
start = new Date();
ArrayList results = paddlePredictor.runImage(inputData, width, height, channels, inputImage);
@@ -317,7 +323,7 @@ public class Predictor {
for (Point p : result.getPoints()) {
sb.append("(").append(p.x).append(",").append(p.y).append(") ");
}
- Log.i(TAG, sb.toString());
+ Log.i(TAG, sb.toString()); // print the result to the Logcat panel
outputResultSb.append(i + 1).append(": ").append(result.getLabel()).append("\n");
}
outputResult = outputResultSb.toString();
diff --git a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/SettingsActivity.java b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/SettingsActivity.java
index 28727e60a3e5c124318e1d19e84fe7e4693a8eae..b3653ccc2ebaf2c4005a3b30c4c8162dad2f31fc 100644
--- a/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/SettingsActivity.java
+++ b/deploy/android_demo/app/src/main/java/com/baidu/paddle/lite/demo/ocr/SettingsActivity.java
@@ -5,7 +5,8 @@ import android.os.Bundle;
import android.preference.CheckBoxPreference;
import android.preference.EditTextPreference;
import android.preference.ListPreference;
-import android.support.v7.app.ActionBar;
+
+import androidx.appcompat.app.ActionBar;
import java.util.ArrayList;
import java.util.List;
diff --git a/deploy/android_demo/app/src/main/res/layout/activity_main.xml b/deploy/android_demo/app/src/main/res/layout/activity_main.xml
index 98b9bc1a875dc3a006496d9f87510f0f5d2f204c..adc70e200f737a5eaced2f3015ca675dc7a085db 100644
--- a/deploy/android_demo/app/src/main/res/layout/activity_main.xml
+++ b/deploy/android_demo/app/src/main/res/layout/activity_main.xml
@@ -1,5 +1,5 @@
-
-
\ No newline at end of file
+
\ No newline at end of file
diff --git a/deploy/android_demo/app/src/main/res/layout/activity_mini.xml b/deploy/android_demo/app/src/main/res/layout/activity_mini.xml
new file mode 100644
index 0000000000000000000000000000000000000000..ec4622ae5c21334769a0ef7f084f73a3ac6a05ab
--- /dev/null
+++ b/deploy/android_demo/app/src/main/res/layout/activity_mini.xml
@@ -0,0 +1,46 @@
+
+
+
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties b/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
index 578b5482ad45045124272fa3e54d065a77c2eea2..63dac4e990eb3ca985e9c1018c84f26fbab0ac78 100644
--- a/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
+++ b/deploy/android_demo/gradle/wrapper/gradle-wrapper.properties
@@ -1,4 +1,4 @@
-#Thu Aug 22 15:05:37 CST 2019
+#Wed Jul 22 23:48:44 CST 2020
distributionBase=GRADLE_USER_HOME
distributionPath=wrapper/dists
zipStoreBase=GRADLE_USER_HOME
diff --git a/deploy/cpp_infer/include/config.h b/deploy/cpp_infer/include/config.h
index 2adefb73675b3607b3dd993f48517e51e9a3d941..c5257d8ade72bfc7a68c5ca5c2c78fd5b6c1983c 100644
--- a/deploy/cpp_infer/include/config.h
+++ b/deploy/cpp_infer/include/config.h
@@ -47,7 +47,7 @@ public:
this->det_db_box_thresh = stod(config_map_["det_db_box_thresh"]);
- this->det_db_box_thresh = stod(config_map_["det_db_box_thresh"]);
+ this->det_db_unclip_ratio = stod(config_map_["det_db_unclip_ratio"]);
this->det_model_dir.assign(config_map_["det_model_dir"]);
diff --git a/deploy/hubserving/ocr_rec/params.py b/deploy/hubserving/ocr_rec/params.py
index fe93fc0870a1bb1d050285ca858de4cddb6b3a61..a6b2ee1902d2073f94202a0e9268e7bd821dfa21 100644
--- a/deploy/hubserving/ocr_rec/params.py
+++ b/deploy/hubserving/ocr_rec/params.py
@@ -33,6 +33,8 @@ def read_params():
cfg.rec_image_shape = "3, 32, 320"
cfg.rec_char_type = 'ch'
cfg.rec_batch_num = 30
+ cfg.max_text_length = 25
+
cfg.rec_char_dict_path = "./ppocr/utils/ppocr_keys_v1.txt"
cfg.use_space_char = True
diff --git a/deploy/hubserving/ocr_system/params.py b/deploy/hubserving/ocr_system/params.py
index 5b3bb1ea44b6cd262283797807c2c77646202fe8..6ece2d6fcfc703125ab5d1b3fa4566c43937d583 100644
--- a/deploy/hubserving/ocr_system/params.py
+++ b/deploy/hubserving/ocr_system/params.py
@@ -33,6 +33,8 @@ def read_params():
cfg.rec_image_shape = "3, 32, 320"
cfg.rec_char_type = 'ch'
cfg.rec_batch_num = 30
+ cfg.max_text_length = 25
+
cfg.rec_char_dict_path = "./ppocr/utils/ppocr_keys_v1.txt"
cfg.use_space_char = True
diff --git a/deploy/ios_demo/download_dependencies.sh b/deploy/ios_demo/download_dependencies.sh
index 40a8b3d259c83614c9bd210815fbe2f3bf49a607..1d09dac9cced12d63e44e4d6ea4870b542f7516b 100755
--- a/deploy/ios_demo/download_dependencies.sh
+++ b/deploy/ios_demo/download_dependencies.sh
@@ -26,7 +26,7 @@ download_and_extract() {
}
echo -e "[Download ios ocr demo denpendancy]\n"
-download_and_extract "${OCR_MODEL_URL}" "./ios-demo/ocr_demo/models"
-download_and_extract "${PADDLE_LITE_LIB_URL}" "./ios-demo/ocr_demo"
-download_and_extract "${OPENCV3_FRAMEWORK_URL}" "./ios-demo/ocr_demo"
+download_and_extract "${OCR_MODEL_URL}" "./ocr_demo/models"
+download_and_extract "${PADDLE_LITE_LIB_URL}" "./ocr_demo"
+download_and_extract "${OPENCV3_FRAMEWORK_URL}" "./ocr_demo"
echo -e "[done]\n"
diff --git a/deploy/lite/readme_en.md b/deploy/lite/readme_en.md
index 00bb5e587de59e7d7001a365a52606323c61541e..bf2f4b2212bf4d362fe2127da66127ab05766668 100644
--- a/deploy/lite/readme_en.md
+++ b/deploy/lite/readme_en.md
@@ -13,7 +13,7 @@ deployment solutions for end-side deployment issues.
- Computer (for Compiling Paddle Lite)
- Mobile phone (arm7 or arm8)
-## 2. Build ncnn library
+## 2. Build PaddleLite library
[build for Docker](https://paddle-lite.readthedocs.io/zh/latest/user_guides/source_compile.html#docker)
[build for Linux](https://paddle-lite.readthedocs.io/zh/latest/user_guides/source_compile.html#android)
[build for MAC OS](https://paddle-lite.readthedocs.io/zh/latest/user_guides/source_compile.html#id13)
diff --git a/deploy/pdserving/det_local_server.py b/deploy/pdserving/det_local_server.py
index acfccdb6d24a7e1ba490705dd147f21dbf921d31..eb7948daadd018810997bba78367e86aa3398e31 100644
--- a/deploy/pdserving/det_local_server.py
+++ b/deploy/pdserving/det_local_server.py
@@ -21,7 +21,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
import time
import re
import base64
@@ -64,8 +67,13 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_det_model")
-ocr_service.set_gpus("0")
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
ocr_service.init_det()
-ocr_service.run_debugger_service()
+if sys.argv[1] == 'gpu':
+ ocr_service.set_gpus("0")
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+ ocr_service.run_debugger_service(gpu=True)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292)
+ ocr_service.run_debugger_service()
ocr_service.run_web_service()
diff --git a/deploy/pdserving/det_web_server.py b/deploy/pdserving/det_web_server.py
index dd69be0c70eb0f4dd627aa47ad33045a204f78c0..14be74130dcb413c31a3e76c150d74f65575f451 100644
--- a/deploy/pdserving/det_web_server.py
+++ b/deploy/pdserving/det_web_server.py
@@ -21,7 +21,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
import time
import re
import base64
@@ -65,8 +68,11 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_det_model")
-ocr_service.set_gpus("0")
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+if sys.argv[1] == 'gpu':
+ ocr_service.set_gpus("0")
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
ocr_service.init_det()
ocr_service.run_rpc_service()
ocr_service.run_web_service()
diff --git a/deploy/pdserving/ocr_local_server.py b/deploy/pdserving/ocr_local_server.py
index 93e2d7a3d1dc64451774ecf790c2ebd3b39f1d91..de5b3d13f12afd4a84c5d46625682c42f418d6bb 100644
--- a/deploy/pdserving/ocr_local_server.py
+++ b/deploy/pdserving/ocr_local_server.py
@@ -22,7 +22,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
from paddle_serving_app.local_predict import Debugger
import time
import re
@@ -37,8 +40,12 @@ class OCRService(WebService):
(2, 0, 1))
])
self.det_client = Debugger()
- self.det_client.load_model_config(
- det_model_config, gpu=True, profile=False)
+ if sys.argv[1] == 'gpu':
+ self.det_client.load_model_config(
+ det_model_config, gpu=True, profile=False)
+ elif sys.argv[1] == 'cpu':
+ self.det_client.load_model_config(
+ det_model_config, gpu=False, profile=False)
self.ocr_reader = OCRReader()
def preprocess(self, feed=[], fetch=[]):
@@ -97,7 +104,11 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_rec_model")
-ocr_service.prepare_server(workdir="workdir", port=9292)
ocr_service.init_det_debugger(det_model_config="ocr_det_model")
-ocr_service.run_debugger_service(gpu=True)
+if sys.argv[1] == 'gpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+ ocr_service.run_debugger_service(gpu=True)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
+ ocr_service.run_debugger_service()
ocr_service.run_web_service()
diff --git a/deploy/pdserving/ocr_web_server.py b/deploy/pdserving/ocr_web_server.py
index d017f6b9b560dc82158641b9f3a9f80137b40716..6c0de44661958a6425f57039261969551ff552c5 100644
--- a/deploy/pdserving/ocr_web_server.py
+++ b/deploy/pdserving/ocr_web_server.py
@@ -22,7 +22,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
import time
import re
import base64
@@ -90,8 +93,11 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_rec_model")
-ocr_service.set_gpus("0")
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+if sys.argv[1] == 'gpu':
+ ocr_service.set_gpus("0")
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292)
ocr_service.init_det_client(
det_port=9293,
det_client_config="ocr_det_client/serving_client_conf.prototxt")
diff --git a/deploy/pdserving/readme.md b/deploy/pdserving/readme.md
index d6069cd35d201ee7418324c5d9b0b8b28c6569bd..f9ad80b896be0be29e3a7bb17e4aa119af81d5c4 100644
--- a/deploy/pdserving/readme.md
+++ b/deploy/pdserving/readme.md
@@ -16,20 +16,33 @@
**Python3操作指南:**
```
-#以下提供beta版本的paddle serving whl包,欢迎试用,正式版会在7月底正式上线
+#以下提供beta版本的paddle serving whl包,欢迎试用,正式版会在8月中正式上线
+#GPU用户下载server包使用这个链接
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_server_gpu-0.3.2-py3-none-any.whl
+python -m pip install paddle_serving_server_gpu-0.3.2-py3-none-any.whl
+#CPU版本使用这个链接
+wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_server-0.3.2-py3-none-any.whl
+python -m pip install paddle_serving_server-0.3.2-py3-none-any.whl
+#客户端和App包使用以下链接(CPU,GPU通用)
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_client-0.3.2-cp36-none-any.whl
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_app-0.1.2-py3-none-any.whl
-python -m pip install paddle_serving_app-0.1.2-py3-none-any.whl paddle_serving_server_gpu-0.3.2-py3-none-any.whl paddle_serving_client-0.3.2-cp36-none-any.whl
+python -m pip install paddle_serving_app-0.1.2-py3-none-any.whl paddle_serving_client-0.3.2-cp36-none-any.whl
```
**Python2操作指南:**
```
-#以下提供beta版本的paddle serving whl包,欢迎试用,正式版会在7月底正式上线
+#以下提供beta版本的paddle serving whl包,欢迎试用,正式版会在8月中正式上线
+#GPU用户下载server包使用这个链接
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_server_gpu-0.3.2-py2-none-any.whl
+python -m pip install paddle_serving_server_gpu-0.3.2-py2-none-any.whl
+#CPU版本使用这个链接
+wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_server-0.3.2-py2-none-any.whl
+python -m pip install paddle_serving_server-0.3.2-py2-none-any.whl
+
+#客户端和App包使用以下链接(CPU,GPU通用)
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_app-0.1.2-py2-none-any.whl
wget --no-check-certificate https://paddle-serving.bj.bcebos.com/others/paddle_serving_client-0.3.2-cp27-none-any.whl
-python -m pip install paddle_serving_app-0.1.2-py2-none-any.whl paddle_serving_server_gpu-0.3.2-py2-none-any.whl paddle_serving_client-0.3.2-cp27-none-any.whl
+python -m pip install paddle_serving_app-0.1.2-py2-none-any.whl paddle_serving_client-0.3.2-cp27-none-any.whl
```
### 2. 模型转换
@@ -42,6 +55,23 @@ tar -xzvf ocr_det.tar.gz
```
执行上述命令会下载`db_crnn_mobile`的模型,如果想要下载规模更大的`db_crnn_server`模型,可以在下载预测模型并解压之后。参考[如何从Paddle保存的预测模型转为Paddle Serving格式可部署的模型](https://github.com/PaddlePaddle/Serving/blob/develop/doc/INFERENCE_TO_SERVING_CN.md)。
+我们以`ch_rec_r34_vd_crnn`模型为例,下载链接如下:
+
+```
+wget --no-check-certificate https://paddleocr.bj.bcebos.com/ch_models/ch_rec_r34_vd_crnn_infer.tar
+tar xf ch_rec_r34_vd_crnn_infer.tar
+```
+接着我们按照Serving模型转换教程,运行下列python文件。
+```
+from paddle_serving_client.io import inference_model_to_serving
+inference_model_dir = "ch_rec_r34_vd_crnn"
+serving_client_dir = "serving_client_dir"
+serving_server_dir = "serving_server_dir"
+feed_var_names, fetch_var_names = inference_model_to_serving(
+ inference_model_dir, serving_client_dir, serving_server_dir, model_filename="model", params_filename="params")
+```
+最终会在`serving_client_dir`和`serving_server_dir`生成客户端和服务端的模型配置。
+
### 3. 启动服务
启动服务可以根据实际需求选择启动`标准版`或者`快速版`,两种方式的对比如下表:
@@ -53,14 +83,21 @@ tar -xzvf ocr_det.tar.gz
#### 方式1. 启动标准版服务
```
+# cpu,gpu启动二选一,以下是cpu启动
+python -m paddle_serving_server.serve --model ocr_det_model --port 9293
+python ocr_web_server.py cpu
+# gpu启动
python -m paddle_serving_server_gpu.serve --model ocr_det_model --port 9293 --gpu_id 0
-python ocr_web_server.py
+python ocr_web_server.py gpu
```
#### 方式2. 启动快速版服务
```
-python ocr_local_server.py
+# cpu,gpu启动二选一,以下是cpu启动
+python ocr_local_server.py cpu
+# gpu启动
+python ocr_local_server.py gpu
```
## 发送预测请求
@@ -85,7 +122,7 @@ python ocr_web_client.py
在`ocr_web_server.py`或是`ocr_local_server.py`当中的`preprocess`函数里面做了检测服务和识别服务的前处理,`postprocess`函数里面做了识别的后处理服务,可以在相应的函数中做修改。调用了`paddle_serving_app`库提供的常见CV模型的前处理/后处理库。
-如果想要单独启动Paddle Serving的检测服务和识别服务,参见下列表格, 执行对应的脚本即可。
+如果想要单独启动Paddle Serving的检测服务和识别服务,参见下列表格,执行对应的脚本即可,并在命令行参数中注明使用CPU或GPU来提供服务。
| 模型 | 标准版 | 快速版 |
| ---- | ----------------- | ------------------- |
diff --git a/deploy/pdserving/rec_local_server.py b/deploy/pdserving/rec_local_server.py
index fbe67aafee5c8dcae269cd4ad6f6100ed514f0b7..ba021c1cd5054071eb115b3e6e9c64cb572ff871 100644
--- a/deploy/pdserving/rec_local_server.py
+++ b/deploy/pdserving/rec_local_server.py
@@ -22,7 +22,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
import time
import re
import base64
@@ -65,8 +68,12 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_rec_model")
-ocr_service.set_gpus("0")
ocr_service.init_rec()
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
-ocr_service.run_debugger_service()
+if sys.argv[1] == 'gpu':
+ ocr_service.set_gpus("0")
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+ ocr_service.run_debugger_service(gpu=True)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
+ ocr_service.run_debugger_service()
ocr_service.run_web_service()
diff --git a/deploy/pdserving/rec_web_server.py b/deploy/pdserving/rec_web_server.py
index 684c313d4d50cfe00c576c81aad05a810525dcce..0f4e9f6d264ed602f387bfaf0303cd59af7823fa 100644
--- a/deploy/pdserving/rec_web_server.py
+++ b/deploy/pdserving/rec_web_server.py
@@ -22,7 +22,10 @@ from paddle_serving_client import Client
from paddle_serving_app.reader import Sequential, URL2Image, ResizeByFactor
from paddle_serving_app.reader import Div, Normalize, Transpose
from paddle_serving_app.reader import DBPostProcess, FilterBoxes, GetRotateCropImage, SortedBoxes
-from paddle_serving_server_gpu.web_service import WebService
+if sys.argv[1] == 'gpu':
+ from paddle_serving_server_gpu.web_service import WebService
+elif sys.argv[1] == 'cpu':
+ from paddle_serving_server.web_service import WebService
import time
import re
import base64
@@ -64,8 +67,11 @@ class OCRService(WebService):
ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_rec_model")
-ocr_service.set_gpus("0")
ocr_service.init_rec()
-ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+if sys.argv[1] == 'gpu':
+ ocr_service.set_gpus("0")
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
+elif sys.argv[1] == 'cpu':
+ ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
ocr_service.run_rpc_service()
ocr_service.run_web_service()
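
All five `deploy/pdserving` scripts above apply the same pattern: the first command-line argument selects the CPU or GPU Paddle Serving package and the matching `prepare_server`/`run_debugger_service` calls. The sketch below restates that pattern in one self-contained file for illustration only; the argument guard and the stub `OCRService` body are assumptions added here (the real scripts also define `init_rec`/`init_det`, `preprocess` and `postprocess`), while the module paths and `WebService` calls are the ones already shown in the diffs.

```python
import sys

# The patched scripts index sys.argv[1] directly and would raise IndexError
# without an argument; this guard is illustrative, not part of the patch.
if len(sys.argv) < 2 or sys.argv[1] not in ("cpu", "gpu"):
    sys.exit("usage: python <server_script>.py cpu|gpu")

# Import the server implementation that matches the requested device.
if sys.argv[1] == "gpu":
    from paddle_serving_server_gpu.web_service import WebService
else:
    from paddle_serving_server.web_service import WebService


class OCRService(WebService):
    # Stub: the real scripts implement init_rec()/init_det(), preprocess()
    # and postprocess() here.
    pass


ocr_service = OCRService(name="ocr")
ocr_service.load_model_config("ocr_rec_model")
if sys.argv[1] == "gpu":
    ocr_service.set_gpus("0")
    ocr_service.prepare_server(workdir="workdir", port=9292, device="gpu", gpuid=0)
else:
    ocr_service.prepare_server(workdir="workdir", port=9292, device="cpu")
ocr_service.run_rpc_service()
ocr_service.run_web_service()
```

Launched as `python ocr_web_server.py cpu` or `python ocr_web_server.py gpu`, matching the commands added to `deploy/pdserving/readme.md`.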
diff --git a/doc/doc_ch/FAQ.md b/doc/doc_ch/FAQ.md
index 38b8baeed585fbd58a56eca310cf2dbe4392b5f7..752a4adbf09e41fc19a892e6f491f5d09cc938d3 100644
--- a/doc/doc_ch/FAQ.md
+++ b/doc/doc_ch/FAQ.md
@@ -39,7 +39,7 @@ PaddleOCR已完成Windows和Mac系统适配,运行时注意两点:1、在[
- 识别:
英文数据集,MJSynth和SynthText合成数据,数据量上千万。
中文数据集,LSVT街景数据集根据真值将图crop出来,并进行位置校准,总共30w张图像。此外基于LSVT的语料,合成数据500w。
-
+
其中,公开数据集都是开源的,用户可自行搜索下载,也可参考[中文数据集](./datasets.md),合成数据暂不开源,用户可使用开源合成工具自行合成,可参考的合成工具包括[text_renderer](https://github.com/Sanster/text_renderer)、[SynthText](https://github.com/ankush-me/SynthText)、[TextRecognitionDataGenerator](https://github.com/Belval/TextRecognitionDataGenerator)等。
10. **使用带TPS的识别模型预测报错**
@@ -49,3 +49,5 @@ PaddleOCR已完成Windows和Mac系统适配,运行时注意两点:1、在[
11. **自定义字典训练的模型,识别结果出现字典里没出现的字**
预测时没有设置采用的自定义字典路径。设置方法是在预测时,通过增加输入参数rec_char_dict_path来设置。
+12. **cpp infer与python inference的结果不一致,相差较大**
+导出的inference model版本与预测库版本需要保持一致,比如在Windows下,Paddle官网提供的预测库版本是1.8,而PaddleOCR提供的inference model 版本是1.7,因此最终预测结果会有差别。可以在Paddle1.8环境下导出模型,再基于该模型进行预测。
diff --git a/doc/doc_ch/config.md b/doc/doc_ch/config.md
index bee2637094b1386210677788f0944d232f7ff82c..fe8db9c893cf0e6190111de5fe7627d2fe52a4fd 100644
--- a/doc/doc_ch/config.md
+++ b/doc/doc_ch/config.md
@@ -32,6 +32,9 @@
| loss_type | 设置 loss 类型 | ctc | 支持两种loss: ctc / attention |
| distort | 设置是否使用数据增强 | false | 设置为true时,将在训练时随机进行扰动,支持的扰动操作可阅读[img_tools.py](https://github.com/PaddlePaddle/PaddleOCR/blob/develop/ppocr/data/rec/img_tools.py) |
| use_space_char | 设置是否识别空格 | false | 仅在 character_type=ch 时支持空格 |
+| average_window | ModelAverage优化器中的窗口长度计算比例 | 0.15 | 目前仅应用于SRN |
+| max_average_window | 平均值计算窗口长度的最大值 | 15625 | 推荐设置为一轮训练中mini-batch的数目 |
+| min_average_window | 平均值计算窗口长度的最小值 | 10000 | \ |
| reader_yml | 设置reader配置文件 | ./configs/rec/rec_icdar15_reader.yml | \ |
| pretrain_weights | 加载预训练模型路径 | ./pretrain_models/CRNN/best_accuracy | \ |
| checkpoints | 加载模型参数路径 | None | 用于中断后加载参数继续训练 |
@@ -60,6 +63,9 @@
| beta1 | 设置一阶矩估计的指数衰减率 | 0.9 | \ |
| beta2 | 设置二阶矩估计的指数衰减率 | 0.999 | \ |
| decay | 是否使用decay | \ | \ |
-| function(decay) | 设置decay方式 | cosine_decay | 目前只支持cosin_decay |
-| step_each_epoch | 每个epoch包含多少次迭代 | 20 | 计算方式:total_image_num / (batch_size_per_card * card_size) |
-| total_epoch | 总共迭代多少个epoch | 1000 | 与Global.epoch_num 一致 |
+| function(decay) | 设置decay方式 | - | 目前支持cosine_decay, cosine_decay_warmup与piecewise_decay |
+| step_each_epoch | 每个epoch包含多少次迭代, cosine_decay/cosine_decay_warmup时有效 | 20 | 计算方式:total_image_num / (batch_size_per_card * card_size) |
+| total_epoch | 总共迭代多少个epoch, cosine_decay/cosine_decay_warmup时有效 | 1000 | 与Global.epoch_num 一致 |
+| warmup_minibatch | 线性warmup的迭代次数, cosine_decay_warmup时有效 | 1000 | \ |
+| boundaries | 学习率下降时的迭代次数间隔, piecewise_decay时有效 | - | 参数为列表形式 |
+| decay_rate | 学习率衰减系数, piecewise_decay时有效 | - | \ |
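
The new rows in `doc/doc_ch/config.md` document three learning-rate schedules (cosine_decay, cosine_decay_warmup, piecewise_decay) and their parameters. For intuition only, here is a small Python sketch of how such schedules behave; the hyperparameter values are hypothetical and this is not the implementation PaddleOCR itself uses.

```python
import math

def cosine_decay(base_lr, step, step_each_epoch, total_epoch):
    # Half-cosine from base_lr down to 0 over step_each_epoch * total_epoch steps.
    progress = min(step / float(step_each_epoch * total_epoch), 1.0)
    return base_lr * 0.5 * (1.0 + math.cos(math.pi * progress))

def cosine_decay_warmup(base_lr, step, step_each_epoch, total_epoch, warmup_minibatch):
    # Linear warmup for the first warmup_minibatch steps, then cosine decay.
    if step < warmup_minibatch:
        return base_lr * step / float(warmup_minibatch)
    return cosine_decay(base_lr, step - warmup_minibatch, step_each_epoch, total_epoch)

def piecewise_decay(base_lr, step, boundaries, decay_rate):
    # Multiply the learning rate by decay_rate each time a boundary is passed.
    passed = sum(1 for b in boundaries if step >= b)
    return base_lr * (decay_rate ** passed)

# Hypothetical values, only to show the shape of each schedule.
for s in (0, 500, 1000, 10000, 20000):
    print(s,
          round(cosine_decay(1e-3, s, 20, 1000), 6),
          round(cosine_decay_warmup(1e-3, s, 20, 1000, 1000), 6),
          round(piecewise_decay(1e-3, s, [10000, 20000], 0.1), 6))
```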
diff --git a/doc/doc_ch/detection.md b/doc/doc_ch/detection.md
index cfff12f976f0a0abe06c37187755d78358e5ca25..84c90d18a4ac5e1133a8202d574b789848060855 100644
--- a/doc/doc_ch/detection.md
+++ b/doc/doc_ch/detection.md
@@ -1,13 +1,13 @@
# 文字检测
-本节以icdar15数据集为例,介绍PaddleOCR中检测模型的训练、评估与测试。
+本节以icdar2015数据集为例,介绍PaddleOCR中检测模型的训练、评估与测试。
## 数据准备
icdar2015数据集可以从[官网](https://rrc.cvc.uab.es/?ch=4&com=downloads)下载到,首次下载需注册。
将下载到的数据集解压到工作目录下,假设解压在 PaddleOCR/train_data/ 下。另外,PaddleOCR将零散的标注文件整理成单独的标注文件
,您可以通过wget的方式进行下载。
-```
+```shell
# 在PaddleOCR路径下
cd PaddleOCR/
wget -P ./train_data/ https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt
@@ -23,21 +23,21 @@ wget -P ./train_data/ https://paddleocr.bj.bcebos.com/dataset/test_icdar2015_la
└─ test_icdar2015_label.txt icdar数据集的测试标注
```
-提供的标注文件格式为,其中中间是"\t"分隔:
+提供的标注文件格式如下,中间用"\t"分隔:
```
" 图像文件名 json.dumps编码的图像标注信息"
-ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]], ...}]
+ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}]
```
json.dumps编码前的图像标注信息是包含多个字典的list,字典中的 `points` 表示文本框的四个点的坐标(x, y),从左上角的点开始顺时针排列。
-`transcription` 表示当前文本框的文字,在文本检测任务中并不需要这个信息。
-如果您想在其他数据集上训练PaddleOCR,可以按照上述形式构建标注文件。
+`transcription` 表示当前文本框的文字,**当其内容为“###”时,表示该文本框无效,在训练时会跳过。**
+如果您想在其他数据集上训练,可以按照上述形式构建标注文件。
## 快速启动训练
首先下载模型backbone的pretrain model,PaddleOCR的检测模型目前支持两种backbone,分别是MobileNetV3、ResNet50_vd,
您可以根据需求使用[PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/master/ppcls/modeling/architectures)中的模型更换backbone。
-```
+```shell
cd PaddleOCR/
# 下载MobileNetV3的预训练模型
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_5_pretrained.tar
@@ -45,7 +45,7 @@ wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/Mob
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar
# 解压预训练模型文件,以MobileNetV3为例
-tar xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar ./pretrain_models/
+tar -xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar -C ./pretrain_models/
# 注:正确解压backbone预训练权重文件后,文件夹下包含众多以网络层命名的权重文件,格式如下:
./pretrain_models/MobileNetV3_large_x0_5_pretrained/
@@ -57,11 +57,11 @@ tar xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar ./pretrain_models
```
-**启动训练**
+#### 启动训练
*如果您安装的是cpu版本,请将配置文件中的 `use_gpu` 字段修改为false*
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=./pretrain_models/MobileNetV3_large_x0_5_pretrained/
```
@@ -69,52 +69,52 @@ python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.pretrain_weights=
有关配置文件的详细解释,请参考[链接](./config.md)。
您也可以通过-o参数在不需要修改yml文件的情况下,改变训练的参数,比如,调整训练的学习率为0.0001
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Optimizer.base_lr=0.0001
```
-**断点训练**
+#### 断点训练
如果训练程序中断,如果希望加载训练中断的模型从而恢复训练,可以通过指定Global.checkpoints指定要加载的模型路径:
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./your/trained/model
```
-**注意**:Global.checkpoints的优先级高于Global.pretrain_weights的优先级,即同时指定两个参数时,优先加载Global.checkpoints指定的模型,如果Global.checkpoints指定的模型路径有误,会加载Global.pretrain_weights指定的模型。
+**注意**:`Global.checkpoints`的优先级高于`Global.pretrain_weights`的优先级,即同时指定两个参数时,优先加载`Global.checkpoints`指定的模型,如果`Global.checkpoints`指定的模型路径有误,会加载`Global.pretrain_weights`指定的模型。
## 指标评估
PaddleOCR计算三个OCR检测相关的指标,分别是:Precision、Recall、Hmean。
-运行如下代码,根据配置文件det_db_mv3.yml中save_res_path指定的测试集检测结果文件,计算评估指标。
+运行如下代码,根据配置文件`det_db_mv3.yml`中`save_res_path`指定的测试集检测结果文件,计算评估指标。
-评估时设置后处理参数box_thresh=0.6,unclip_ratio=1.5,使用不同数据集、不同模型训练,可调整这两个参数进行优化
-```
+评估时设置后处理参数`box_thresh=0.6`,`unclip_ratio=1.5`,使用不同数据集、不同模型训练,可调整这两个参数进行优化
+```shell
python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```
-训练中模型参数默认保存在Global.save_model_dir目录下。在评估指标时,需要设置Global.checkpoints指向保存的参数文件。
+训练中模型参数默认保存在`Global.save_model_dir`目录下。在评估指标时,需要设置`Global.checkpoints`指向保存的参数文件。
比如:
-```
+```shell
python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="./output/det_db/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```
-* 注:box_thresh、unclip_ratio是DB后处理所需要的参数,在评估EAST模型时不需要设置
+* 注:`box_thresh`、`unclip_ratio`是DB后处理所需要的参数,在评估EAST模型时不需要设置
## 测试检测效果
测试单张图像的检测效果
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/img_10.jpg" Global.checkpoints="./output/det_db/best_accuracy"
```
测试DB模型时,调整后处理阈值,
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/img_10.jpg" Global.checkpoints="./output/det_db/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```
测试文件夹下所有图像的检测效果
-```
+```shell
python3 tools/infer_det.py -c configs/det/det_mv3_db.yml -o TestReader.infer_img="./doc/imgs_en/" Global.checkpoints="./output/det_db/best_accuracy"
```
diff --git a/doc/doc_ch/inference.md b/doc/doc_ch/inference.md
index 64ee8f4247c8e57e3211309d32cb93a883dc7b7d..293fee2f4291a8400661de1ed08f0c6807eef977 100644
--- a/doc/doc_ch/inference.md
+++ b/doc/doc_ch/inference.md
@@ -1,14 +1,37 @@
# 基于Python预测引擎推理
-inference 模型(fluid.io.save_inference_model保存的模型)
-一般是模型训练完成后保存的固化模型,多用于预测部署。
-训练过程中保存的模型是checkpoints模型,保存的是模型的参数,多用于恢复训练等。
+inference 模型(`fluid.io.save_inference_model`保存的模型)
+一般是模型训练完成后保存的固化模型,多用于预测部署。训练过程中保存的模型是checkpoints模型,保存的是模型的参数,多用于恢复训练等。
与checkpoints模型相比,inference 模型会额外保存模型的结构信息,在预测部署、加速推理上性能优越,灵活方便,适合与实际系统集成。更详细的介绍请参考文档[分类预测框架](https://paddleclas.readthedocs.io/zh_CN/latest/extension/paddle_inference.html).
接下来首先介绍如何将训练的模型转换成inference模型,然后将依次介绍文本检测、文本识别以及两者串联基于预测引擎推理。
+
+- [一、训练模型转inference模型](#训练模型转inference模型)
+ - [检测模型转inference模型](#检测模型转inference模型)
+ - [识别模型转inference模型](#识别模型转inference模型)
+
+- [二、文本检测模型推理](#文本检测模型推理)
+ - [1. 超轻量中文检测模型推理](#超轻量中文检测模型推理)
+ - [2. DB文本检测模型推理](#DB文本检测模型推理)
+ - [3. EAST文本检测模型推理](#EAST文本检测模型推理)
+ - [4. SAST文本检测模型推理](#SAST文本检测模型推理)
+
+- [三、文本识别模型推理](#文本识别模型推理)
+ - [1. 超轻量中文识别模型推理](#超轻量中文识别模型推理)
+ - [2. 基于CTC损失的识别模型推理](#基于CTC损失的识别模型推理)
+ - [3. 基于Attention损失的识别模型推理](#基于Attention损失的识别模型推理)
+ - [4. 自定义文本识别字典的推理](#自定义文本识别字典的推理)
+
+- [四、文本检测、识别串联推理](#文本检测、识别串联推理)
+ - [1. 超轻量中文OCR模型推理](#超轻量中文OCR模型推理)
+ - [2. 其他模型推理](#其他模型推理)
+
+
+
## 一、训练模型转inference模型
+
### 检测模型转inference模型
下载超轻量级中文检测模型:
@@ -24,15 +47,16 @@ wget -P ./ch_lite/ https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db.tar &
python3 tools/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./ch_lite/det_mv3_db/best_accuracy Global.save_inference_dir=./inference/det_db/
```
-转inference模型时,使用的配置文件和训练时使用的配置文件相同。另外,还需要设置配置文件中的Global.checkpoints、Global.save_inference_dir参数。
-其中Global.checkpoints指向训练中保存的模型参数文件,Global.save_inference_dir是生成的inference模型要保存的目录。
-转换成功后,在save_inference_dir 目录下有两个文件:
+转inference模型时,使用的配置文件和训练时使用的配置文件相同。另外,还需要设置配置文件中的`Global.checkpoints`、`Global.save_inference_dir`参数。
+其中`Global.checkpoints`指向训练中保存的模型参数文件,`Global.save_inference_dir`是生成的inference模型要保存的目录。
+转换成功后,在`save_inference_dir`目录下有两个文件:
```
inference/det_db/
└─ model 检测inference模型的program文件
└─ params 检测inference模型的参数文件
```
+
### 识别模型转inference模型
下载超轻量中文识别模型:
@@ -51,7 +75,7 @@ python3 tools/export_model.py -c configs/rec/rec_chinese_lite_train.yml -o Globa
Global.save_inference_dir=./inference/rec_crnn/
```
-如果您是在自己的数据集上训练的模型,并且调整了中文字符的字典文件,请注意修改配置文件中的character_dict_path是否是所需要的字典文件。
+**注意:**如果您是在自己的数据集上训练的模型,并且调整了中文字符的字典文件,请检查配置文件中的`character_dict_path`是否为所需要的字典文件。
转换成功后,在目录下有两个文件:
```
@@ -60,11 +84,13 @@ python3 tools/export_model.py -c configs/rec/rec_chinese_lite_train.yml -o Globa
└─ params 识别inference模型的参数文件
```
+
## 二、文本检测模型推理
-下面将介绍超轻量中文检测模型推理、DB文本检测模型推理和EAST文本检测模型推理。默认配置是根据DB文本检测模型推理设置的。由于EAST和DB算法差别很大,在推理时,需要通过传入相应的参数适配EAST文本检测算法。
+文本检测模型推理,默认使用DB模型的配置参数。当不使用DB模型时,在推理时,需要通过传入相应的参数进行算法适配,细节参考下文。
-### 1.超轻量中文检测模型推理
+
+### 1. 超轻量中文检测模型推理
超轻量中文检测模型推理,可以执行如下命令:
@@ -72,11 +98,11 @@ python3 tools/export_model.py -c configs/rec/rec_chinese_lite_train.yml -o Globa
python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_dir="./inference/det_db/"
```
-可视化文本检测结果默认保存到 ./inference_results 文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
+可视化文本检测结果默认保存到`./inference_results`文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
![](../imgs_results/det_res_2.jpg)
-通过设置参数det_max_side_len的大小,改变检测算法中图片规范化的最大值。当图片的长宽都小于det_max_side_len,则使用原图预测,否则将图片等比例缩放到最大值,进行预测。该参数默认设置为det_max_side_len=960. 如果输入图片的分辨率比较大,而且想使用更大的分辨率预测,可以执行如下命令:
+通过设置参数`det_max_side_len`的大小,改变检测算法中图片规范化的最大值。当图片的长宽都小于`det_max_side_len`,则使用原图预测,否则将图片等比例缩放到最大值,进行预测。该参数默认设置为`det_max_side_len=960`。 如果输入图片的分辨率比较大,而且想使用更大的分辨率预测,可以执行如下命令:
```
python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_dir="./inference/det_db/" --det_max_side_len=1200
@@ -87,7 +113,8 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_di
python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_dir="./inference/det_db/" --use_gpu=False
```
-### 2.DB文本检测模型推理
+
+### 2. DB文本检测模型推理
首先将DB文本检测训练过程中保存的模型,转换成inference model。以基于Resnet50_vd骨干网络,在ICDAR2015英文数据集训练的模型为例([模型下载地址](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)),可以使用如下命令进行转换:
@@ -105,13 +132,14 @@ DB文本检测模型推理,可以执行如下命令:
python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_db/"
```
-可视化文本检测结果默认保存到 ./inference_results 文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
+可视化文本检测结果默认保存到`./inference_results`文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
![](../imgs_results/det_res_img_10_db.jpg)
-**注意**:由于ICDAR2015数据集只有1000张训练图像,主要针对英文场景,所以上述模型对中文文本图像检测效果非常差。
+**注意**:由于ICDAR2015数据集只有1000张训练图像,且主要针对英文场景,所以上述模型对中文文本图像检测效果会比较差。
-### 3.EAST文本检测模型推理
+
+### 3. EAST文本检测模型推理
首先将EAST文本检测训练过程中保存的模型,转换成inference model。以基于Resnet50_vd骨干网络,在ICDAR2015英文数据集训练的模型为例([模型下载地址](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)),可以使用如下命令进行转换:
@@ -123,24 +151,59 @@ python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_
python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints="./models/det_r50_vd_east/best_accuracy" Global.save_inference_dir="./inference/det_east"
```
-EAST文本检测模型推理,需要设置参数det_algorithm,指定检测算法类型为EAST,可以执行如下命令:
+**EAST文本检测模型推理,需要设置参数`--det_algorithm="EAST"`**,可以执行如下命令:
```
python3 tools/infer/predict_det.py --det_algorithm="EAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_east/"
```
-可视化文本检测结果默认保存到 ./inference_results 文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
+可视化文本检测结果默认保存到`./inference_results`文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
![](../imgs_results/det_res_img_10_east.jpg)
-**注意**:本代码库中EAST后处理中NMS采用的Python版本,所以预测速度比较耗时。如果采用C++版本,会有明显加速。
+**注意**:本代码库中,EAST后处理Locality-Aware NMS有python和c++两种版本,c++版速度明显快于python版。由于c++版本nms编译版本问题,只有python3.5环境下会调用c++版nms,其他情况将调用python版nms。
+
+
+
+### 4. SAST文本检测模型推理
+#### (1). 四边形文本检测模型(ICDAR2015)
+首先将SAST文本检测训练过程中保存的模型,转换成inference model。以基于Resnet50_vd骨干网络,在ICDAR2015英文数据集训练的模型为例([模型下载地址](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)),可以使用如下命令进行转换:
+```
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o Global.checkpoints="./models/sast_r50_vd_icdar2015/best_accuracy" Global.save_inference_dir="./inference/det_sast_ic15"
+```
+**SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`**,可以执行如下命令:
+```
+python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_sast_ic15/"
+```
+可视化文本检测结果默认保存到`./inference_results`文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
+
+![](../imgs_results/det_res_img_10_sast.jpg)
+
+#### (2). 弯曲文本检测模型(Total-Text)
+首先将SAST文本检测训练过程中保存的模型,转换成inference model。以基于Resnet50_vd骨干网络,在Total-Text英文数据集训练的模型为例([模型下载地址](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)),可以使用如下命令进行转换:
+
+```
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints="./models/sast_r50_vd_total_text/best_accuracy" Global.save_inference_dir="./inference/det_sast_tt"
+```
+
+**SAST文本检测模型推理,需要设置参数`--det_algorithm="SAST"`,同时,还需要增加参数`--det_sast_polygon=True`**,可以执行如下命令:
+```
+python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
+```
+可视化文本检测结果默认保存到`./inference_results`文件夹里面,结果文件的名称前缀为'det_res'。结果示例如下:
+![](../imgs_results/det_res_img623_sast.jpg)
+**注意**:本代码库中,SAST后处理Locality-Aware NMS有python和c++两种版本,c++版速度明显快于python版。由于c++版本nms编译版本问题,只有python3.5环境下会调用c++版nms,其他情况将调用python版nms。
+
+
+
## 三、文本识别模型推理
下面将介绍超轻量中文识别模型推理、基于CTC损失的识别模型推理和基于Attention损失的识别模型推理。对于中文文本识别,建议优先选择基于CTC损失的识别模型,实践中也发现基于Attention损失的效果不如基于CTC损失的识别模型。此外,如果训练时修改了文本的字典,请参考下面的自定义文本识别字典的推理。
-### 1.超轻量中文识别模型推理
+
+### 1. 超轻量中文识别模型推理
超轻量中文识别模型推理,可以执行如下命令:
@@ -155,7 +218,8 @@ python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words/ch/word_4.jpg"
Predicts of ./doc/imgs_words/ch/word_4.jpg:['实力活力', 0.89552695]
-### 2.基于CTC损失的识别模型推理
+
+### 2. 基于CTC损失的识别模型推理
我们以STAR-Net为例,介绍基于CTC损失的识别模型推理。 CRNN和Rosetta使用方式类似,不用设置识别算法参数rec_algorithm。
@@ -176,7 +240,8 @@ STAR-Net文本识别模型推理,可以执行如下命令:
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
```
-### 3.基于Attention损失的识别模型推理
+
+### 3. 基于Attention损失的识别模型推理
基于Attention损失的识别模型与ctc不同,需要额外设置识别算法参数 --rec_algorithm="RARE"
@@ -202,16 +267,18 @@ self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz"
dict_character = list(self.character_str)
```
-### 4.自定义文本识别字典的推理
+
+### 4. 自定义文本识别字典的推理
如果训练时修改了文本的字典,在使用inference模型预测时,需要通过`--rec_char_dict_path`指定使用的字典路径
```
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./your inference model" --rec_image_shape="3, 32, 100" --rec_char_type="en" --rec_char_dict_path="your text dict path"
```
+
## 四、文本检测、识别串联推理
-
-### 1.超轻量中文OCR模型推理
+
+### 1. 超轻量中文OCR模型推理
在执行预测时,需要通过参数image_dir指定单张图像或者图像集合的路径、参数det_model_dir指定检测inference模型的路径和参数rec_model_dir指定识别inference模型的路径。可视化识别结果默认保存到 ./inference_results 文件夹里面。
@@ -223,9 +290,14 @@ python3 tools/infer/predict_system.py --image_dir="./doc/imgs/2.jpg" --det_model
![](../imgs_results/2.jpg)
-### 2.其他模型推理
+
+### 2. 其他模型推理
+
+如果想尝试使用其他检测算法或者识别算法,请参考上述文本检测模型推理和文本识别模型推理,更新相应配置和模型。
+
+**注意:由于检测框矫正逻辑的局限性,暂不支持使用SAST弯曲文本检测模型(即,使用参数`--det_sast_polygon=True`时)进行模型串联。**
-如果想尝试使用其他检测算法或者识别算法,请参考上述文本检测模型推理和文本识别模型推理,更新相应配置和模型,下面给出基于EAST文本检测和STAR-Net文本识别执行命令:
+下面给出基于EAST文本检测和STAR-Net文本识别执行命令:
```
python3 tools/infer/predict_system.py --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_east/" --det_algorithm="EAST" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
diff --git a/doc/doc_ch/installation.md b/doc/doc_ch/installation.md
index 8d7b659eb3c71df38f57b01e8510680ae135e35f..5bc14b897aacb3c71e26291cc2028a71e709ef21 100644
--- a/doc/doc_ch/installation.md
+++ b/doc/doc_ch/installation.md
@@ -11,7 +11,7 @@ PaddleOCR 工作环境
*如您希望使用 mac 或 windows直接运行预测代码,可以从第2步开始执行。*
-1. (建议)准备docker环境。第一次使用这个镜像,会自动下载该镜像,请耐心等待。
+**1. (建议)准备docker环境。第一次使用这个镜像,会自动下载该镜像,请耐心等待。**
```
# 切换到工作目录下
cd /home/Projects
@@ -21,10 +21,10 @@ cd /home/Projects
如果您希望在CPU环境下使用docker,使用docker而不是nvidia-docker创建docker
sudo docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev /bin/bash
-如果您的机器安装的是CUDA9,请运行以下命令创建容器
+如果使用CUDA9,请运行以下命令创建容器
sudo nvidia-docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev /bin/bash
-如果您的机器安装的是CUDA10,请运行以下命令创建容器
+如果使用CUDA10,请运行以下命令创建容器
sudo nvidia-docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda10.0-cudnn7-dev /bin/bash
您也可以访问[DockerHub](https://hub.docker.com/r/paddlepaddle/paddle/tags/)获取与您机器适配的镜像。
@@ -47,19 +47,7 @@ docker images
hub.baidubce.com/paddlepaddle/paddle latest-gpu-cuda9.0-cudnn7-dev f56310dcc829
```
-2. 更改python3默认版本
-
-docker中的python默认使用python3.5,PaddleOCR需要在Python3.7下执行(该版本下,对于第三方依赖库的兼容性更好一些)。进入docker后,可以编辑`/etc/profile`文件,之后在文件末尾添加
-
-```shell
-
-alias python3=python3.7
-alias pip3=pip3.7
-```
-
-保存之后,使用`source /etc/profile`命令使设置的默认Python生效。
-
-3. 安装PaddlePaddle Fluid v1.7
+**2. 安装PaddlePaddle Fluid v1.7**
```
pip3 install --upgrade pip
@@ -76,7 +64,7 @@ python3 -m pip install paddlepaddle==1.7.2 -i https://pypi.tuna.tsinghua.edu.cn/
更多的版本需求,请参照[安装文档](https://www.paddlepaddle.org.cn/install/quick)中的说明进行操作。
```
-4. 克隆PaddleOCR repo代码
+**3. 克隆PaddleOCR repo代码**
```
【推荐】git clone https://github.com/PaddlePaddle/PaddleOCR
@@ -87,7 +75,7 @@ git clone https://gitee.com/paddlepaddle/PaddleOCR
注:码云托管代码可能无法实时同步本github项目更新,存在3~5天延时,请优先使用推荐方式。
```
-5. 安装第三方库
+**4. 安装第三方库**
```
cd PaddleOCR
pip3 install -r requirments.txt
diff --git a/doc/doc_ch/recognition.md b/doc/doc_ch/recognition.md
index eda456c5c5d7573fd89de2a8ac0c1042a5b3a59b..c554b9f11c96744ae928aaf9992606a364680557 100644
--- a/doc/doc_ch/recognition.md
+++ b/doc/doc_ch/recognition.md
@@ -18,6 +18,8 @@ ln -sf /train_data/dataset
若您本地没有数据集,可以在官网下载 [icdar2015](http://rrc.cvc.uab.es/?ch=4&com=downloads) 数据,用于快速验证。也可以参考[DTRB](https://github.com/clovaai/deep-text-recognition-benchmark#download-lmdb-dataset-for-traininig-and-evaluation-from-here),下载 benchmark 所需的lmdb格式数据集。
+如果希望复现SRN的论文指标,需要下载离线[增广数据](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA),提取码: y3ry。增广数据是由MJSynth和SynthText做旋转和扰动得到的。数据下载完成后请解压到 {your_path}/PaddleOCR/train_data/data_lmdb_release/training/ 路径下。
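+
+下载完成后,可参考如下示例命令解压(压缩包文件名仅为示意,请以网盘中的实际文件名为准):
+```
+# 示例:假设下载得到的压缩包名为 srn_aug_data.tar(文件名为假设)
+mkdir -p {your_path}/PaddleOCR/train_data/data_lmdb_release/training/
+tar -xf srn_aug_data.tar -C {your_path}/PaddleOCR/train_data/data_lmdb_release/training/
+```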
+
* 使用自己数据集:
若您希望使用自己的数据进行训练,请参考下文组织您的数据。
@@ -161,6 +163,7 @@ PaddleOCR支持训练和评估交替进行, 可以在 `configs/rec/rec_icdar15_t
| rec_r34_vd_none_none_ctc.yml | Rosetta | Resnet34_vd | None | None | ctc |
| rec_r34_vd_tps_bilstm_attn.yml | RARE | Resnet34_vd | tps | BiLSTM | attention |
| rec_r34_vd_tps_bilstm_ctc.yml | STARNet | Resnet34_vd | tps | BiLSTM | ctc |
+| rec_r50fpn_vd_none_srn.yml | SRN | Resnet50_fpn_vd | None | rnn | srn |
训练中文数据,推荐使用`rec_chinese_lite_train.yml`,如您希望尝试其他算法在中文数据集上的效果,请参考下列说明修改配置文件:
diff --git a/doc/doc_ch/update.md b/doc/doc_ch/update.md
index e22d05b4316ebabbb7bb036e1be1b09600538925..1cd7788511c29df8934efe2c1462aaca68c9b92b 100644
--- a/doc/doc_ch/update.md
+++ b/doc/doc_ch/update.md
@@ -1,4 +1,5 @@
# 更新
+- 2020.8.16 开源文本检测算法[SAST](https://arxiv.org/abs/1908.05498)和文本识别算法[SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23 发布7月21日B站直播课回放和PPT,PaddleOCR开源大礼包全面解读,[获取地址](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15 添加基于EasyEdge和Paddle-Lite的移动端DEMO,支持iOS和Android系统
- 2020.7.15 完善预测部署,添加基于C++预测引擎推理、服务化部署和端侧部署方案,以及超轻量级中文OCR模型预测耗时Benchmark
diff --git a/doc/doc_en/FAQ_en.md b/doc/doc_en/FAQ_en.md
index 04feb363777801088efa0425195afd9e065a5b1e..a89567f7d912c815d802f021bf8b751f7d94e25c 100644
--- a/doc/doc_en/FAQ_en.md
+++ b/doc/doc_en/FAQ_en.md
@@ -48,6 +48,9 @@ At present, the open source model, dataset and magnitude are as follows:
Error message: Input(X) dims[3] and Input(Grid) dims[2] should be equal, but received X dimension[3](108) != Grid dimension[2](100)
Solution:TPS does not support variable shape. Please set --rec_image_shape='3,32,100' and --rec_char_type='en'
-11. **Custom dictionary used during training, the recognition results show that words do not appear in the dictionary**
+11. **A custom dictionary was used during training, but the recognition results contain words that do not appear in the dictionary**
+The custom dictionary path was not set when making the prediction. The solution is to set the parameter `rec_char_dict_path` to the corresponding dictionary file.
-The used custom dictionary path is not set when making prediction. The solution is setting parameter `rec_char_dict_path` to the corresponding dictionary file.
\ No newline at end of file
+
+12. **Results of cpp_infer and python_inference are very different**
+The versions of the exported inference model and the inference library should be the same. For example, on the Windows platform, the inference library that PaddlePaddle provides is version 1.8, but the inference model that PaddleOCR provides is version 1.7, so you should export the model yourself (`tools/export_model.py`) under PaddlePaddle 1.8 and then use the exported model for inference.
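+
+For example, the detection model could be re-exported with a command of the following form (a sketch only; the config file and checkpoint path are placeholders and should match the model you actually use), run under the PaddlePaddle version that matches your inference library:
+```
+# placeholder checkpoint path; replace with your own trained model
+python3 tools/export_model.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="./output/det_db/best_accuracy" Global.save_inference_dir="./inference/det_db"
+```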
diff --git a/doc/doc_en/config_en.md b/doc/doc_en/config_en.md
index ffead1ee335a3d2f10491792a23a69e0f22a1755..b54def895f0758df7cdbd089253d6acd712d2b8e 100644
--- a/doc/doc_en/config_en.md
+++ b/doc/doc_en/config_en.md
@@ -60,6 +60,9 @@ Take `rec_icdar15_train.yml` as an example:
| beta1 | Set the exponential decay rate for the 1st moment estimates | 0.9 | \ |
| beta2 | Set the exponential decay rate for the 2nd moment estimates | 0.999 | \ |
| decay | Whether to use decay | \ | \ |
-| function(decay) | Set the decay function | cosine_decay | Only support cosine_decay |
-| step_each_epoch | The number of steps in an epoch. | 20 | Calculation :total_image_num / (batch_size_per_card * card_size) |
-| total_epoch | The number of epochs | 1000 | Consistent with Global.epoch_num |
+| function(decay) | Set the decay function | cosine_decay | Support cosine_decay, cosine_decay_warmup and piecewise_decay |
+| step_each_epoch | The number of steps in an epoch. Used in cosine_decay/cosine_decay_warmup | 20 | Calculation: total_image_num / (batch_size_per_card * card_size) |
+| total_epoch | The number of epochs. Used in cosine_decay/cosine_decay_warmup | 1000 | Consistent with Global.epoch_num |
+| warmup_minibatch | Number of steps for linear warmup. Used in cosine_decay_warmup | 1000 | \ |
+| boundaries | The step boundaries at which the learning rate is reduced. Used in piecewise_decay | - | The format is a list |
+| decay_rate | Learning rate decay rate. Used in piecewise_decay | - | \ |
diff --git a/doc/doc_en/detection_en.md b/doc/doc_en/detection_en.md
index 280c4bf8b8b4e08da32c719dc8ceba6eb8391f26..08e6b63bb77ad1cb5ec4c741d6cad1d099f6c070 100644
--- a/doc/doc_en/detection_en.md
+++ b/doc/doc_en/detection_en.md
@@ -1,12 +1,12 @@
# TEXT DETECTION
-This section uses the icdar15 dataset as an example to introduce the training, evaluation, and testing of the detection model in PaddleOCR.
+This section uses the icdar2015 dataset as an example to introduce the training, evaluation, and testing of the detection model in PaddleOCR.
## DATA PREPARATION
The icdar2015 dataset can be obtained from [official website](https://rrc.cvc.uab.es/?ch=4&com=downloads). Registration is required for downloading.
Decompress the downloaded dataset to the working directory, assuming it is decompressed under PaddleOCR/train_data/. In addition, PaddleOCR organizes many scattered annotation files into two separate annotation files for train and test respectively, which can be downloaded by wget:
-```
+```shell
# Under the PaddleOCR path
cd PaddleOCR/
wget -P ./train_data/ https://paddleocr.bj.bcebos.com/dataset/train_icdar2015_label.txt
@@ -25,18 +25,21 @@ After decompressing the data set and downloading the annotation file, PaddleOCR/
The provided annotation file format is as follows, separated by "\t":
```
" Image file name Image annotation information encoded by json.dumps"
-ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]], ...}]
+ch4_test_images/img_61.jpg [{"transcription": "MASA", "points": [[310, 104], [416, 141], [418, 216], [312, 179]]}, {...}]
```
-The image annotation after json.dumps() encoding is a list containing multiple dictionaries. The `points` in the dictionary represent the coordinates (x, y) of the four points of the text box, arranged clockwise from the point at the upper left corner.
+The image annotation after **json.dumps()** encoding is a list containing multiple dictionaries.
+
+The `points` in the dictionary represent the coordinates (x, y) of the four points of the text box, arranged clockwise from the point at the upper left corner.
+
+`transcription` represents the text of the current text box. **When its content is "###" it means that the text box is invalid and will be skipped during training.**
-`transcription` represents the text of the current text box, and this information is not needed in the text detection task.
-If you want to train PaddleOCR on other datasets, you can build the annotation file according to the above format.
+If you want to train PaddleOCR on other datasets, please build the annotation file according to the above format.
## TRAINING
First download the pretrained model. The detection model of PaddleOCR currently supports two backbones, namely MobileNetV3 and ResNet50_vd. You can use the model in [PaddleClas](https://github.com/PaddlePaddle/PaddleClas/tree/master/ppcls/modeling/architectures) to replace backbone according to your needs.
-```
+```shell
cd PaddleOCR/
# Download the pre-trained model of MobileNetV3
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/MobileNetV3_large_x0_5_pretrained.tar
@@ -44,7 +47,7 @@ wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/Mob
wget -P ./pretrain_models/ https://paddle-imagenet-models-name.bj.bcebos.com/ResNet50_vd_ssld_pretrained.tar
# decompressing the pre-training model file, take MobileNetV3 as an example
-tar xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar ./pretrain_models/
+tar -xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar -C ./pretrain_models/
# Note: After decompressing the backbone pre-training weight file correctly, the file list in the folder is as follows:
./pretrain_models/MobileNetV3_large_x0_5_pretrained/
@@ -56,9 +59,9 @@ tar xf ./pretrain_models/MobileNetV3_large_x0_5_pretrained.tar ./pretrain_models
```
-**START TRAINING**
+#### START TRAINING
*If CPU version installed, please set the parameter `use_gpu` to `false` in the configuration.*
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml
```
@@ -66,19 +69,19 @@ In the above instruction, use `-c` to select the training to use the `configs/de
For a detailed explanation of the configuration file, please refer to [config](./config_en.md).
You can also use `-o` to change the training parameters without modifying the yml file. For example, adjust the training learning rate to 0.0001
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Optimizer.base_lr=0.0001
```
-**load trained model and conntinue training**
+#### load trained model and continue training
If you expect to load trained model and continue the training again, you can specify the parameter `Global.checkpoints` as the model path to be loaded.
For example:
-```
+```shell
python3 tools/train.py -c configs/det/det_mv3_db.yml -o Global.checkpoints=./your/trained/model
```
-**Note**:The priority of `Global.checkpoints` is higher than that of `Global.pretrain_weights`, that is, when two parameters are specified at the same time, the model specified by Global.checkpoints will be loaded first. If the model path specified by `Global.checkpoints` is wrong, the one specified by `Global.pretrain_weights` will be loaded.
+**Note**: The priority of `Global.checkpoints` is higher than that of `Global.pretrain_weights`, that is, when two parameters are specified at the same time, the model specified by `Global.checkpoints` will be loaded first. If the model path specified by `Global.checkpoints` is wrong, the one specified by `Global.pretrain_weights` will be loaded.
## EVALUATION
@@ -89,7 +92,7 @@ Run the following code to calculate the evaluation indicators. The result will b
When evaluating, set post-processing parameters `box_thresh=0.6`, `unclip_ratio=1.5`. If you use different datasets, different models for training, these two parameters should be adjusted for better result.
-```
+```shell
python3 tools/eval.py -c configs/det/det_mv3_db.yml -o Global.checkpoints="{path/to/weights}/best_accuracy" PostProcess.box_thresh=0.6 PostProcess.unclip_ratio=1.5
```
The model parameters during training are saved in the `Global.save_model_dir` directory by default. When evaluating indicators, you need to set `Global.checkpoints` to point to the saved parameter file.
diff --git a/doc/doc_en/inference_en.md b/doc/doc_en/inference_en.md
index 58f2e3fb312af862f1eff4969772c8937d5cb767..83ec2a90c45a320815e10e8572d894068c0b5130 100644
--- a/doc/doc_en/inference_en.md
+++ b/doc/doc_en/inference_en.md
@@ -1,7 +1,7 @@
# Reasoning based on Python prediction engine
-The inference model (the model saved by fluid.io.save_inference_model) is generally a solidified model saved after the model training is completed, and is mostly used to give prediction in deployment.
+The inference model (the model saved by `fluid.io.save_inference_model`) is generally a solidified model saved after the model training is completed, and is mostly used to give prediction in deployment.
The model saved during the training process is the checkpoints model, which saves the parameters of the model and is mostly used to resume training.
@@ -9,7 +9,31 @@ Compared with the checkpoints model, the inference model will additionally save
Next, we first introduce how to convert a trained model into an inference model, and then we will introduce text detection, text recognition, and the concatenation of them based on inference model.
+- [CONVERT TRAINING MODEL TO INFERENCE MODEL](#CONVERT)
+ - [Convert detection model to inference model](#Convert_detection_model)
+ - [Convert recognition model to inference model](#Convert_recognition_model)
+
+
+- [TEXT DETECTION MODEL INFERENCE](#DETECTION_MODEL_INFERENCE)
+ - [1. LIGHTWEIGHT CHINESE DETECTION MODEL INFERENCE](#LIGHTWEIGHT_DETECTION)
+ - [2. DB TEXT DETECTION MODEL INFERENCE](#DB_DETECTION)
+ - [3. EAST TEXT DETECTION MODEL INFERENCE](#EAST_DETECTION)
+ - [4. SAST TEXT DETECTION MODEL INFERENCE](#SAST_DETECTION)
+
+- [TEXT RECOGNITION MODEL INFERENCE](#RECOGNITION_MODEL_INFERENCE)
+ - [1. LIGHTWEIGHT CHINESE MODEL](#LIGHTWEIGHT_RECOGNITION)
+ - [2. CTC-BASED TEXT RECOGNITION MODEL INFERENCE](#CTC-BASED_RECOGNITION)
+ - [3. ATTENTION-BASED TEXT RECOGNITION MODEL INFERENCE](#ATTENTION-BASED_RECOGNITION)
+ - [4. TEXT RECOGNITION MODEL INFERENCE USING CUSTOM CHARACTERS DICTIONARY](#USING_CUSTOM_CHARACTERS)
+
+
+- [TEXT DETECTION AND RECOGNITION INFERENCE CONCATENATION](#CONCATENATION)
+ - [1. LIGHTWEIGHT CHINESE MODEL](#LIGHTWEIGHT_CHINESE_MODEL)
+ - [2. OTHER MODELS](#OTHER_MODELS)
+
+
## CONVERT TRAINING MODEL TO INFERENCE MODEL
+
### Convert detection model to inference model
Download the lightweight Chinese detection model:
@@ -35,6 +59,7 @@ inference/det_db/
└─ params Check the parameter file of the inference model
```
+
### Convert recognition model to inference model
Download the lightweight Chinese recognition model:
@@ -62,11 +87,13 @@ After the conversion is successful, there are two files in the directory:
└─ params Identify the parameter files of the inference model
```
+
## TEXT DETECTION MODEL INFERENCE
The following will introduce the lightweight Chinese detection model inference, DB text detection model inference, EAST text detection model inference and SAST text detection model inference.
Because EAST and DB algorithms are very different, when inference, it is necessary to **adapt the EAST text detection algorithm by passing in corresponding parameters**.
+
### 1. LIGHTWEIGHT CHINESE DETECTION MODEL INFERENCE
For lightweight Chinese detection model inference, you can execute the following commands:
@@ -90,6 +117,7 @@ If you want to use the CPU for prediction, execute the command as follows
python3 tools/infer/predict_det.py --image_dir="./doc/imgs/2.jpg" --det_model_dir="./inference/det_db/" --use_gpu=False
```
+
### 2. DB TEXT DETECTION MODEL INFERENCE
First, convert the model saved in the DB text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/det_r50_vd_db.tar)), you can use the following command to convert:
@@ -114,6 +142,7 @@ The visualized text detection results are saved to the `./inference_results` fol
**Note**: Since the ICDAR2015 dataset has only 1,000 training images, mainly for English scenes, the above model has very poor detection result on Chinese text images.
+
### 3. EAST TEXT DETECTION MODEL INFERENCE
First, convert the model saved in the EAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/det_r50_vd_east.tar)), you can use the following command to convert:
@@ -126,23 +155,64 @@ First, convert the model saved in the EAST text detection training process into
python3 tools/export_model.py -c configs/det/det_r50_vd_east.yml -o Global.checkpoints="./models/det_r50_vd_east/best_accuracy" Global.save_inference_dir="./inference/det_east"
```
-For EAST text detection model inference, you need to set the parameter det_algorithm, specify the detection algorithm type to EAST, run the following command:
+**For EAST text detection model inference, you need to set the parameter `--det_algorithm="EAST"`**, run the following command:
```
python3 tools/infer/predict_det.py --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_east/" --det_algorithm="EAST"
```
+
The visualized text detection results are saved to the `./inference_results` folder by default, and the name of the result file is prefixed with 'det_res'. Examples of results are as follows:
![](../imgs_results/det_res_img_10_east.jpg)
-**Note**: The Python version of NMS in EAST post-processing used in this codebase so the prediction speed is quite slow. If you use the C++ version, there will be a significant speedup.
+**Note**: The locality-aware NMS in EAST post-processing has both Python and C++ implementations, and the C++ version is significantly faster. Due to a compilation compatibility issue with the C++ NMS, the C++ version is only called in a Python 3.5 environment; in all other cases the Python version is called.
+
+
+
+### 4. SAST TEXT DETECTION MODEL INFERENCE
+#### (1). Quadrangle text detection model (ICDAR2015)
+First, convert the model saved in the SAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the ICDAR2015 English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_icdar2015.tar)), you can use the following command to convert:
+
+```
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_icdar15.yml -o Global.checkpoints="./models/sast_r50_vd_icdar2015/best_accuracy" Global.save_inference_dir="./inference/det_sast_ic15"
+```
+
+**For SAST quadrangle text detection model inference, you need to set the parameter `--det_algorithm="SAST"`**, run the following command:
+
+```
+python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_sast_ic15/"
+```
+
+The visualized text detection results are saved to the `./inference_results` folder by default, and the name of the result file is prefixed with 'det_res'. Examples of results are as follows:
+
+![](../imgs_results/det_res_img_10_sast.jpg)
+#### (2). Curved text detection model (Total-Text)
+First, convert the model saved in the SAST text detection training process into an inference model. Taking the model based on the Resnet50_vd backbone network and trained on the Total-Text English dataset as an example ([model download link](https://paddleocr.bj.bcebos.com/SAST/sast_r50_vd_total_text.tar)), you can use the following command to convert:
+```
+python3 tools/export_model.py -c configs/det/det_r50_vd_sast_totaltext.yml -o Global.checkpoints="./models/sast_r50_vd_total_text/best_accuracy" Global.save_inference_dir="./inference/det_sast_tt"
+```
+
+**For SAST curved text detection model inference, you need to set the parameter `--det_algorithm="SAST"` and `--det_sast_polygon=True`**, run the following command:
+
+```
+python3 tools/infer/predict_det.py --det_algorithm="SAST" --image_dir="./doc/imgs_en/img623.jpg" --det_model_dir="./inference/det_sast_tt/" --det_sast_polygon=True
+```
+
+The visualized text detection results are saved to the `./inference_results` folder by default, and the name of the result file is prefixed with 'det_res'. Examples of results are as follows:
+
+![](../imgs_results/det_res_img623_sast.jpg)
+
+**Note**: The locality-aware NMS in SAST post-processing has both Python and C++ implementations, and the C++ version is significantly faster. Due to a compilation compatibility issue with the C++ NMS, the C++ version is only called in a Python 3.5 environment; in all other cases the Python version is called.
+
+
## TEXT RECOGNITION MODEL INFERENCE
The following will introduce the lightweight Chinese recognition model inference, other CTC-based and Attention-based text recognition models inference. For Chinese text recognition, it is recommended to choose the recognition model based on CTC loss. In practice, it is also found that the result of the model based on Attention loss is not as good as the one based on CTC loss. In addition, if the characters dictionary is modified during training, make sure that you use the same characters set during inferencing. Please check below for details.
+
### 1. LIGHTWEIGHT CHINESE TEXT RECOGNITION MODEL INFERENCE
For lightweight Chinese recognition model inference, you can execute the following commands:
@@ -158,6 +228,7 @@ After executing the command, the prediction results (recognized text and score)
Predicts of ./doc/imgs_words/ch/word_4.jpg:['实力活力', 0.89552695]
+
### 2. CTC-BASED TEXT RECOGNITION MODEL INFERENCE
Taking STAR-Net as an example, we introduce the recognition model inference based on CTC loss. CRNN and Rosetta are used in a similar way, by setting the recognition algorithm parameter `rec_algorithm`.
@@ -178,6 +249,7 @@ For STAR-Net text recognition model inference, execute the following commands:
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
```
+
### 3. ATTENTION-BASED TEXT RECOGNITION MODEL INFERENCE
![](../imgs_words_en/word_336.png)
@@ -196,6 +268,7 @@ self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz"
dict_character = list(self.character_str)
```
+
### 4. TEXT RECOGNITION MODEL INFERENCE USING CUSTOM CHARACTERS DICTIONARY
If the chars dictionary is modified during training, you need to specify the new dictionary path by setting the parameter `rec_char_dict_path` when using your inference model to predict.
@@ -203,8 +276,10 @@ If the chars dictionary is modified during training, you need to specify the new
python3 tools/infer/predict_rec.py --image_dir="./doc/imgs_words_en/word_336.png" --rec_model_dir="./your inference model" --rec_image_shape="3, 32, 100" --rec_char_type="en" --rec_char_dict_path="your text dict path"
```
+
## TEXT DETECTION AND RECOGNITION INFERENCE CONCATENATION
+
### 1. LIGHTWEIGHT CHINESE MODEL
When performing prediction, you need to specify the path of a single image or a folder of images through the parameter `image_dir`, the parameter `det_model_dir` specifies the path to detect the inference model, and the parameter `rec_model_dir` specifies the path to identify the inference model. The visualized recognition results are saved to the `./inference_results` folder by default.
@@ -217,9 +292,14 @@ After executing the command, the recognition result image is as follows:
![](../imgs_results/2.jpg)
+
### 2. OTHER MODELS
-If you want to try other detection algorithms or recognition algorithms, please refer to the above text detection model inference and text recognition model inference, update the corresponding configuration and model, the following command uses the combination of the EAST text detection and STAR-Net text recognition:
+If you want to try other detection or recognition algorithms, please refer to the text detection model inference and text recognition model inference sections above, and update the corresponding configuration and model.
+
+**Note: due to the limitation of the rotation logic for detected boxes, the SAST curved text detection model (i.e., with the parameter `--det_sast_polygon=True`) is not yet supported in the detection-recognition concatenation.**
+
+The following command uses the combination of the EAST text detection and STAR-Net text recognition:
```
python3 tools/infer/predict_system.py --image_dir="./doc/imgs_en/img_10.jpg" --det_model_dir="./inference/det_east/" --det_algorithm="EAST" --rec_model_dir="./inference/starnet/" --rec_image_shape="3, 32, 100" --rec_char_type="en"
diff --git a/doc/doc_en/installation_en.md b/doc/doc_en/installation_en.md
index 94aacd6d24ff0a2cab12d1820a110ef95f8c4e8b..37f66b05e72eee46cdb25dba758b84ff5037c4eb 100644
--- a/doc/doc_en/installation_en.md
+++ b/doc/doc_en/installation_en.md
@@ -11,21 +11,21 @@ It is recommended to use the docker provided by us to run PaddleOCR, please refe
*If you want to directly run the prediction code on mac or windows, you can start from step 2.*
-1. (Recommended) Prepare a docker environment. The first time you use this image, it will be downloaded automatically. Please be patient.
+**1. (Recommended) Prepare a docker environment. The first time you use this image, it will be downloaded automatically. Please be patient.**
```
# Switch to the working directory
cd /home/Projects
# You need to create a docker container for the first run, and do not need to run the current command when you run it again
# Create a docker container named ppocr and map the current directory to the /paddle directory of the container
-#If you want to use docker in a CPU environment, use docker instead of nvidia-docker to create docker
+#If using CPU, use docker instead of nvidia-docker to create docker
sudo docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev /bin/bash
```
-If you have cuda9 installed on your machine, please run the following command to create a container:
+If using CUDA9, please run the following command to create a container:
```
sudo nvidia-docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev /bin/bash
```
-If you have cuda10 installed on your machine, please run the following command to create a container:
+If using CUDA10, please run the following command to create a container:
```
sudo nvidia-docker run --name ppocr -v $PWD:/paddle --network=host -it hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda10.0-cudnn7-dev /bin/bash
```
@@ -49,20 +49,7 @@ docker images
hub.baidubce.com/paddlepaddle/paddle latest-gpu-cuda9.0-cudnn7-dev f56310dcc829
```
-2. Change default version of python3
-
-Python3.5 is used as the default version of python. However, Python3.7 is preferred in PaddleOCR for better compatibility of third-party libraries. After entering docker, you can edit file `/etc/profile`, add the following content at the end of the file.
-
-
-```shell
-
-alias python3=python3.7
-alias pip3=pip3.7
-```
-
-After saving the file `/etc/profile`. The command `source /etc/profile` needs to be carried out to make the default python version as 3.7 effective.
-
-3. Install PaddlePaddle Fluid v1.7 (the higher version is not supported yet, the adaptation work is in progress)
+**2. Install PaddlePaddle Fluid v1.7 (higher versions are not supported yet; the adaptation work is in progress)**
```
pip3 install --upgrade pip
@@ -78,7 +65,7 @@ python3 -m pip install paddlepaddle==1.7.2 -i https://pypi.tuna.tsinghua.edu.cn/
For more software version requirements, please refer to the instructions in [Installation Document](https://www.paddlepaddle.org.cn/install/quick) for operation.
-4. Clone PaddleOCR repo
+**3. Clone PaddleOCR repo**
```
# Recommend
git clone https://github.com/PaddlePaddle/PaddleOCR
@@ -90,7 +77,7 @@ git clone https://gitee.com/paddlepaddle/PaddleOCR
# Note: The cloud-hosting code may not be able to synchronize the update with this GitHub project in real time. There might be a delay of 3-5 days. Please give priority to the recommended method.
```
-5. Install third-party libraries
+**4. Install third-party libraries**
```
cd PaddleOCR
pip3 install -r requirments.txt
diff --git a/doc/doc_en/recognition_en.md b/doc/doc_en/recognition_en.md
index 9b34c4309ff22b57d3a0258b742d081fcc14f716..b9c42afada05950539d4dbfb45e133af946b1aae 100644
--- a/doc/doc_en/recognition_en.md
+++ b/doc/doc_en/recognition_en.md
@@ -18,6 +18,8 @@ ln -sf /train_data/dataset
If you do not have a dataset locally, you can download it on the official website [icdar2015](http://rrc.cvc.uab.es/?ch=4&com=downloads). Also refer to [DTRB](https://github.com/clovaai/deep-text-recognition-benchmark#download-lmdb-dataset-for-traininig-and-evaluation-from-here),download the lmdb format dataset required for benchmark
+If you want to reproduce the results reported in the SRN paper, you need to download the offline [augmented data](https://pan.baidu.com/s/1-HSZ-ZVdqBF2HaBZ5pRAKA), extraction code: y3ry. The augmented data is obtained by rotating and perturbing MJSynth and SynthText. Please unzip the downloaded data into the {your_path}/PaddleOCR/train_data/data_lmdb_Release/training/ directory.
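+
+After downloading, the archive can be unpacked with commands of the following form (the archive name is only a placeholder; use the actual file name from the network disk):
+```
+# assumption: the downloaded archive is named srn_aug_data.tar
+mkdir -p {your_path}/PaddleOCR/train_data/data_lmdb_Release/training/
+tar -xf srn_aug_data.tar -C {your_path}/PaddleOCR/train_data/data_lmdb_Release/training/
+```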
+
* Use your own dataset:
If you want to use your own data for training, please refer to the following to organize your data.
diff --git a/doc/doc_en/update_en.md b/doc/doc_en/update_en.md
index ef02d9dbf7a1379d029ca6fcf819a06d5996151b..dc839d8955afcfa2d1efbee5e02d35f384d6c627 100644
--- a/doc/doc_en/update_en.md
+++ b/doc/doc_en/update_en.md
@@ -1,4 +1,5 @@
# RECENT UPDATES
+- 2020.8.16 Release text detection algorithm [SAST](https://arxiv.org/abs/1908.05498) and text recognition algorithm [SRN](https://arxiv.org/abs/2003.12294)
- 2020.7.23, Release the playback and PPT of live class on BiliBili station, PaddleOCR Introduction, [address](https://aistudio.baidu.com/aistudio/course/introduce/1519)
- 2020.7.15, Add mobile App demo , support both iOS and Android ( based on easyedge and Paddle Lite)
- 2020.7.15, Improve the deployment ability, add the C + + inference , serving deployment. In addition, the benchmarks of the ultra-lightweight Chinese OCR model are provided.
diff --git a/doc/imgs_en/img623.jpg b/doc/imgs_en/img623.jpg
new file mode 100755
index 0000000000000000000000000000000000000000..2fae1b5b1ca1d557355933e93bbe268d8bba6778
Binary files /dev/null and b/doc/imgs_en/img623.jpg differ
diff --git a/doc/imgs_results/det_res_img623_sast.jpg b/doc/imgs_results/det_res_img623_sast.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b2dd538f7729724a33516091d11c081c7c2c1bd7
Binary files /dev/null and b/doc/imgs_results/det_res_img623_sast.jpg differ
diff --git a/doc/imgs_results/det_res_img_10_sast.jpg b/doc/imgs_results/det_res_img_10_sast.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c63faf1354601f25cedb57a3b87f4467999f5457
Binary files /dev/null and b/doc/imgs_results/det_res_img_10_sast.jpg differ
diff --git a/docker/hubserving/cpu/Dockerfile b/docker/hubserving/cpu/Dockerfile
new file mode 100755
index 0000000000000000000000000000000000000000..8404b4c007731fa02816aecfefc2fe3cd7da7c73
--- /dev/null
+++ b/docker/hubserving/cpu/Dockerfile
@@ -0,0 +1,28 @@
+# Version: 1.0.0
+FROM hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda9.0-cudnn7-dev
+
+# PaddleOCR is based on Python 3.7
+RUN pip3.7 install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN python3.7 -m pip install paddlepaddle==1.7.2 -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN pip3.7 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR
+
+WORKDIR /PaddleOCR
+
+RUN pip3.7 install -r requirments.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN mkdir -p /PaddleOCR/inference
+# Download the OCR detection model (light version). If you want to use the normal version, change ch_det_mv3_db_infer to ch_det_r50_vd_db_infer, and also remember to change det_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_det_mv3_db_infer.tar -C /PaddleOCR/inference
+
+# Download the OCR recognition model (light version). If you want to use the normal version, change ch_rec_mv3_crnn_infer to ch_rec_r34_vd_crnn_enhance_infer, and also remember to change rec_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_rec_mv3_crnn_infer.tar -C /PaddleOCR/inference
+
+EXPOSE 8866
+
+CMD ["/bin/bash","-c","export PYTHONPATH=. && hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
\ No newline at end of file
diff --git a/docker/hubserving/gpu/Dockerfile b/docker/hubserving/gpu/Dockerfile
new file mode 100755
index 0000000000000000000000000000000000000000..1320a7f34b05ee470fa2bcf30548295f11427237
--- /dev/null
+++ b/docker/hubserving/gpu/Dockerfile
@@ -0,0 +1,28 @@
+# Version: 1.0.0
+FROM hub.baidubce.com/paddlepaddle/paddle:latest-gpu-cuda10.0-cudnn7-dev
+
+# PaddleOCR is based on Python 3.7
+RUN pip3.7 install --upgrade pip -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN python3.7 -m pip install paddlepaddle-gpu==1.7.2.post107 -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN pip3.7 install paddlehub --upgrade -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN git clone https://gitee.com/PaddlePaddle/PaddleOCR
+
+WORKDIR /PaddleOCR
+
+RUN pip3.7 install -r requirments.txt -i https://pypi.tuna.tsinghua.edu.cn/simple
+
+RUN mkdir -p /PaddleOCR/inference
+# Download the OCR detection model (light version). If you want to use the normal version, change ch_det_mv3_db_infer to ch_det_r50_vd_db_infer, and also remember to change det_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_det_mv3_db_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_det_mv3_db_infer.tar -C /PaddleOCR/inference
+
+# Download the OCR recognition model (light version). If you want to use the normal version, change ch_rec_mv3_crnn_infer to ch_rec_r34_vd_crnn_enhance_infer, and also remember to change rec_model_dir in deploy/hubserving/ocr_system/params.py
+ADD https://paddleocr.bj.bcebos.com/ch_models/ch_rec_mv3_crnn_infer.tar /PaddleOCR/inference
+RUN tar xf /PaddleOCR/inference/ch_rec_mv3_crnn_infer.tar -C /PaddleOCR/inference
+
+EXPOSE 8866
+
+CMD ["/bin/bash","-c","export PYTHONPATH=. && hub install deploy/hubserving/ocr_system/ && hub serving start -m ocr_system"]
\ No newline at end of file
diff --git a/docker/hubserving/readme.md b/docker/hubserving/readme.md
new file mode 100644
index 0000000000000000000000000000000000000000..109e6aa6a536c146095b8f46516b1c895dc08337
--- /dev/null
+++ b/docker/hubserving/readme.md
@@ -0,0 +1,55 @@
+# Docker化部署服务
+在日常项目应用中,相信大家一般都会希望能通过Docker技术,把PaddleOCR服务打包成一个镜像,以便在Docker或k8s环境里,快速发布上线使用。
+
+本文将提供一些标准化的代码来实现这样的目标。大家通过如下步骤可以把PaddleOCR项目快速发布成可调用的Restful API服务。(目前暂时先实现了基于HubServing模式的部署,后续作者计划增加PaddleServing模式的部署)
+
+## 1.实施前提准备
+
+需要先完成如下基本组件的安装:
+a. Docker环境
+b. 显卡驱动和CUDA 10.0+(GPU)
+c. NVIDIA Container Toolkit(GPU,Docker 19.03以上版本可以跳过此步)
+d. cuDNN 7.6+(GPU)
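+
+可以用如下命令快速确认上述组件是否就绪(仅为示例):
+```
+docker --version    # 确认Docker已安装
+nvidia-smi          # GPU环境下确认显卡驱动可用
+```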
+
+## 2.制作镜像
+a.下载PaddleOCR项目代码
+```
+git clone https://github.com/PaddlePaddle/PaddleOCR.git
+```
+b.切换至Dockerfile目录(注:需要区分cpu或gpu版本,下文以cpu为例,gpu版本需要替换一下关键字即可)
+```
+cd docker/hubserving/cpu
+```
+c.生成镜像
+```
+docker build -t paddleocr:cpu .
+```
+
+## 3.启动Docker容器
+a. CPU 版本
+```
+sudo docker run -dp 8866:8866 --name paddle_ocr paddleocr:cpu
+```
+b. GPU 版本 (通过NVIDIA Container Toolkit)
+```
+sudo nvidia-docker run -dp 8866:8866 --name paddle_ocr paddleocr:gpu
+```
+c. GPU 版本 (Docker 19.03以上版本,可以直接用如下命令)
+```
+sudo docker run -dp 8866:8866 --gpus all --name paddle_ocr paddleocr:gpu
+```
+d. 检查服务运行情况(出现:Successfully installed ocr_system和Running on http://0.0.0.0:8866/等信息,表示运行成功)
+```
+docker logs -f paddle_ocr
+```
+
+## 4.测试服务
+a. 计算待识别图片的Base64编码(如果只是测试一下效果,可以通过免费的在线工具实现,如:http://tool.chinaz.com/tools/imgtobase/;也可参考本节末尾的命令行示例)
+b. 发送服务请求(可参见sample_request.txt中的值)
+```
+curl -H "Content-Type:application/json" -X POST --data "{\"images\": [\"填入图片Base64编码(需要删除'data:image/jpg;base64,')\"]}" http://localhost:8866/predict/ocr_system
+```
+c. 返回结果(如果调用成功,会返回如下结果)
+```
+{"msg":"","results":[[{"confidence":0.8403433561325073,"text":"约定","text_region":[[345,377],[641,390],[634,540],[339,528]]},{"confidence":0.8131805658340454,"text":"最终相遇","text_region":[[356,532],[624,530],[624,596],[356,598]]}]],"status":"0"}
+```
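+
+如果不想使用在线工具,也可以参考如下示例,直接在命令行中完成Base64编码并发送请求(图片路径仅为示意,请替换为实际待识别图片):
+```
+# 对图片做Base64编码(base64 -w 0 输出不换行的编码结果)
+IMG_B64=$(base64 -w 0 ./doc/imgs/2.jpg)
+# 将编码结果填入请求体并发送到服务
+curl -H "Content-Type:application/json" -X POST --data "{\"images\": [\"${IMG_B64}\"]}" http://localhost:8866/predict/ocr_system
+```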
diff --git a/docker/hubserving/sample_request.txt b/docker/hubserving/sample_request.txt
new file mode 100644
index 0000000000000000000000000000000000000000..ec2b25b1f54cb7d7cdaeaff6162810a5772442c6
--- /dev/null
+++ b/docker/hubserving/sample_request.txt
@@ -0,0 +1 @@
+curl -H "Content-Type:application/json" -X POST --data "{\"images\": [\"/9j/4AAQSkZJRgABAQEASABIAAD/2wBDAA4KCwwLCQ4MCwwQDw4RFSMXFRMTFSsfIRojMy02NTItMTA4P1FFODxNPTAxRmBHTVRWW1xbN0RjamNYalFZW1f/2wBDAQ8QEBUSFSkXFylXOjE6V1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1dXV1f/wAARCAQABAADASIAAhEBAxEB/8QAGwAAAgMBAQEAAAAAAAAAAAAAAgMBBAUABgf/xABQEAACAgEDAwIDBAUGDAQEBQUAAQIRAwQhMQUSQVFhEyJxBoGR0RQyobHBIzNykpPhBxUkNEJERVJTVHODFkNV8CZWYoIlNTZjZEZ0hKPx/8QAGgEAAwEBAQEAAAAAAAAAAAAAAAECAwQFBv/EACkRAQEBAQADAQACAwEAAQUBAQABAhEDITESBEEiMlETYRQjQlJxsTP/2gAMAwEAAhEDEQA/AKOnzcbmlgzXW55zT5uNzT0+bjc83U4hvY8nuW8c0zHw5rS3L2HJujMNFPYmxMJWg7GR0HvuVNRDdquCxF7kZ4XuvKAMLVY7tUVI4anwa+fFu9iusSvgV+KiMa7YGL13NbhhT5ds3pqo36HktbN5tZklyk6X3D8WffT17VaaYaQxwphKGxv0i2rVCpfK6Za7Cvq40k/uHAX3EOfuKv3Och8A2wHIFyAlIqQddOQlu2S22yC4TjjjhhJyOOEEnEIkCSiUiEEhBKCQKQSJCUSuSCUInN0mLDm96AGbjjjgDgWERW4whIOKshIZBbitB2NFjGIgixAw0Z0CxDgrwHxexlTOTCsUmc5UQDHPbkVkyKEW2wZTSKOpzW+1P6l5z2laGWRzm5NhwnXkrJhpm1iV/HPjct4sleTMxzqi1jmZ6NpwmmWIOzNxz9y5jyGZrkRqK8JpjUyaqHI6iEyb2M6bm6CixTZ0WR0lzG7QyyrCVDVO/I5oLMJbDk7KkZUx8JJh01iA5MTDgaiouCTObOvYBvcKBWdYFnNkmJvYVJhNipy2JpFZJbFDNO5clnNOkzOnO5vc59+6ztOvYBsFSBcgxB0TYtshsBs3kHXSdiJ7ByYmcjWAExE2HNipM1kJXzCo8jcu6FpUyiXMD3NTA9kZGF7mnp3sjm8kONPC+C5BlDC+C9jdnHqNIswQ6IqHCHx4M+KEkEkciUPgcQ+AgWIAYDGMBoCA42LcRr2OaTHKSu0RQ5oGhkXRFDaIaKlBbiA4jqIaNpSIaIoa0DRtmpsDVHJBUC0dGamxxBJ1GvSC0QG0RQupRRFBUTQugNEUHR1FSgANBNUQaQnEM5ugGyglshsFshsmnwae4aYhS3GRkZbpmp7HNgp7HNnNrS3zrDkprc0NPm43MeEuC1hyNNbno6gehwZeNzQwZLrcwdPlutzS0+Tjc57A9BhlaHWUNLktLc0YpNCgRF7jm7iV5pwe/AUZ3tYEHLjTXAhYty09yEtxU4z9fF49NOS5rY8qsLTuS3PZa+HdBQ/Ex8mmp8F59KY6wtu2F8L2NB4WvBHwt+CujjPljrwUteqw37m1PFtwZHWEowjC927Kz7pa+MruIciGjjckNgu2FR1MACjqD7SaDoLo6hnbsQ4h0AOCcQaoA5EojclASUEgUEhBKCQKCEEkohHN0mIAbt2cccMOOOOAOOOCSAOirHRREIjkiLQ6KHwQuKGpGVMxDE6FoKzOgxPYiU6QtypCcmQJB12bLSdMpttu35CyTbYKN8ziUoJEIJIAOLLGOZWQyLoihexzLMJ0Z8JlnHMzsNoY8nuWYTvyZ2OZYx5K8kVXV+MrDcqRTWWhinfky0Omt2TF0wE7CT3MdULC4sJMVB7DER04YmPxt2IXI/Gi801uD2GpiYvYNM2izGwWyLIbGYrIbBsFsQS2JyT2JnKkVs2Sk9yLE1X1WSk9zP77Yerzcqyl37mX56ytXFP3Ocr8laOQPv2CZ9jprkC2LcyHI1kHUylsJkwmxTZrIYJMU2HJsVJmkIufIJ0mRYyNxumaGnnwZcXTLmDJvTMtzo62sMuC/hfBlaed0aWF7I49Z9tJV/Gx8HsVcbHwZncrPiEAmEmLhpOOIYcJDBYTBZNgC0DQTIognNWC4hE2PoKaOSGtENDlBbiC0NohqzaaIhoBrcc0BJGkpF0QyXsC2b50moOObORvKTqJolIJIOp4XRNDO06hdHAVQLCYLdFSjgGwWyWxbkayk5sW5EOQqU9y+gbkC5iXP3AeT3JtCx37hxmUXk9w8eUy2caCmS5bFWOT3GKVo4t3i4+dRssYk7FY4Nst4sb2PZ1SPwtpF/BNpoq4sbos44NM57TbWlyVW5r6fKmt2edwTUUrdF/DnT4ZAbj7ZR5TF9lPZmf8RuqZYx5pPZsOhYpolbOzoST/AFnRLj8radgSvlanJ7rbaivPGmMyY3bdNCX3xezH05S5YUxcsKQyWSa5Qmed8dodPpc4KMW3skjzHUJvPncktlsjc1eTJki48LyZk9P7DzvhX3WRLG0LcaNPJgrwVZ46fBtN9CrQSQbhTOUSukFRJURiiEok9BPac4FhQvwc4C/QVnGgHEtOApwKmgrtHUNcfYGi+gCCOSJoCcggUEIOOk9qOIfIBBxxwBxx1EgHJWMhHyRFWx8I7EWhKQaRyQxRM7TdFDEjkqCSJod4OJIeyJBeRlfJLahuR0ys3bNMxKDkckEiw5INApBIkJRKZCJQgbGVDYzEIJWibCW45GPhkooRkx8JEWGvQnbLOOZn457lrHKzHRrkXYxMrxkNi7ObRn43vQ5FeDpliLtEHDI8lrEitBblrHwaZVDkGmAgrNosVnNkENjCGwG6Jk6FTlQAE5bFDU5KTLU5bGZq50mHEarN1Oe5tWKWS/InK7yP6nJOx/mcYrSmMUyvBMakyOGb3HNgKwqsqKQ3YErGNASRcBMuBU2OkhMlsUCWwLDkqFspJkWOg2mmV0w1ImwNbS5baRsad2jzukb70b+nfyHNrPteV+DLEGU8b3LEGZ3LSLSYaYiLGJkfkzbOsFM6xWBzIfBNkMz1CCzjmcY0Oo6jiaEEJBpJnJBJDgKaBdoe0hbRYLaT4FyQxqhba8mmdETJC2Pkr4EyVG2dEg5AhI3mk0cdw6Big0i+k6iGgqIfAdBbQqew6QmfDDoJlITKZOR0ytkyV5LmiFKdCMmRIDJkq9ynky+5pNEfLN7inm9yrLMr5FPMvUq0Ljy7hwy7rcz/AIy8BwyNmej418eS+CxjkZeHIXMU7OLyQ48xjw+xdw4XtsOw6e62LsMKiraPSulEY8Xqthk6gtuQsk1BUt2Iacnb3IItzld2WdPntpN0xLx2gFFxewybunm3yy5H1T3MrSTlSVmhCTrcztNa76QKyu9mIlOlyBHKm+SoVi58RtU3YLVgQnY1NMVIl478CZ4b8F5RTBeOybQy8mBVwVMmn9jani9ivkw+xn
+jYWTD7FLNhq9jdzYavYz80N2VnZsbJjp8AUXs2PkquNM6c67ABIfhwTyzUYpts7DieSSSVtnrugdH3U8kd36i1r/hydVNB9nZZIJuLbZQ6z02WgyxjKLXdxa5PoGs1uj6RpO/PNR22S3cn6JeWfPuqdRy9T1Us+VNK6hC/wBVen1J+fT1yeoy3ABx9iw4guJUqFSUPYW4FxxFygXNEquNHUOcAXD2L6CqOoY4kND6AENbhtEND6A0cTRNAAhJHJDIRthaE44eSwokQjSGxRjablEYokqIxIi0wpHUGkdRIA1QEnsMkqFTajFtjhKuaW9IUiW3KTbOSN56iXJEpHJBJCDkgkiEgiScSkQkGkAckEkcgkiQlIbFC0hsUTQbj8FrGV4ItY0c+zh8EPiheNFiK2ObSkpDoOgEglsR01jG9y1B7FLGyzGZrk4sJhJiVIJM2ipTbObAshsqKRNiJsKctxWR7BwrSsktmZOtns0aOSWzMnVu20NnpnqLbHQh7BQx+xYhAKgEMfsNWO/A7Hj24Hxxkq4qrES8deC4sfsc8fsB8UXBguBeeP2FSx+xQ4oyh7CJw5L84CZwKg4zskaENUy/khyUsiouJpbdDIO2Ka3DxrcdhNHRr5l9Tf06+RGFol8yN/AqgjC/VZPWzHQYpKxiTTJuWkWIsNMTGxqIuTNTJTATCRnYBWcyES+DHUAWcjmckc1CUgkjkgkqAOSDRCRI4ESFy4GNC5IsEzFSHSFSHAW20C6YTFsvN4kLjTOQSd7M6lyjaaI2C2CoHGxlGsoRQLQdENFygiXInI+R0+RE+BwKefhmdmyNNps0c3DMjVurKiScuar3KWbNV7gZsrbaRWdt22b5z/0CllbewKbb3ZFEpGhGxY6DqhER0DPQW8bplzDIoY2W8T2OXcNchiUFwBkl4Q+bb2SpCnD2N+qV+ze2d2V4LHaC4h0EqByxptUhqQ3DFNitIenx9tbF1R22QuMapFrHC0QqRnarI4Qe+7K2PM7VssdUxytNLZclCCaNp8TfrUxZdluWoZLMrHJplrHNqiaTShJMYnZTxz9yxCRjSMaTQrJBU9hqdoGStGdps7PDnYzNRjpvY280dmZuqhswl9mxMqqys4OU6SLmog+50i70npstTlTadWdUvIa10HpTyTWSa29z1up1Gm6RoJZ8u0YrZLlvwl7k6XT4tJpnKbUYwVtvZJep4XrvVp9U1dptafG2sSfn1bXv49hyf2q38xV1+tz9S1ctTqXcntGN2oL0RXSOSCSFagNENDKIaDoJaAaHtAtFSkQ4gOJYcQXEqaCu4kND3EFoqaBDQDQ9oBouUi69jqDaOSH00KNsdCFeDoQ3Hxj7Ea0ERiOiiFEYkZ2mlIYlsckFRBho5oLgGTSAiplLUZLfYnsuR2py9qaT3KSbbtm2J/abXIJI5IJFUnJEpHJEiCTkcgkhE5IJIhIJLYQSluEkckEkSEpDIIFIbBbk0HQRaxoTjRaxo59qh+NbFiKFQWw6K2OeriUgqJSJog3R2Y5MUluNRpgjExiYpOkSmbwzb2BlLYGwZPYo+hbsXke24disjVDJVzSqzOyK5F/Km2yp23IEUMIew+MPYmER0YioTCGy2LEYX4OxxVD4QEotY/Y5wvwWVDbg5wvwUao8fsKlD2Lzxi5Y/YAz54yvOHsaM8fOxXyQ9ioGZmhSexn5IbtmxmhaM/JCmXEVTcfY6Cpj5QoFKmFqWhoo7o3cK2Rj6GPBtYVwjFeViEdhiR0VsGluNaUg0iIxGpE0wpBJBUdRjonENhEMw3QElIitw4o5DEkEkckShwOo444oOfAuQbYtjBUhMuR0hUg6CmwGrDaA4H1IeAkyaOoP1wCT3Hp2hCDg6Ztjy/8ASMpkNDFUlaOaOrNlNWmitkVJl2aKuWDa4NIGbn4dGXqYOVm3lx87FLNiu9jSJ485nxU2ys0bWow2nsZmSDjJqjXNIijg2qBrcoCiNiLSoZEikfB0yzjfBUgyzjfBhuBuLH7AyxpIa3QuUti1lNJITOhk5CXK2BO8jcTp2KW7HQVIVC3jkm0XcdJXZmwaTsa8m1W0iOrl4nWtTtLcqw06rdBzyJKxa1K8sf6TRPAqOimmHHLGXkaqfoy+hEG0yxjlYtY74DUWmTqdLiwmSxaYzwZWEVNJplDUwtOjRkrRXyY3J8DmVRkYdI8+dRSu2ey6ZoIabCtt/JW6ToFB/Ekt3x7Iq/avrMtLheh0cqzzXzzX/lxfp7v9i39DfMV89sv7VdcWpnLp+kn/ACMZVlkntNrwvZPn1Z5pMDjaqoJMqo6NBoWmEmSBHHJnEhzQLQdHUPoLohxG0dQdBLiA4lhxAcSpolZxBase4gOPsXKCGgoxsY4B44Dug6EKSHKHsFGA1RM7TAoBpUEokpC6YUibCoFoAGToTOTpjZIp6vJ2R7VyyszpX0q5p903T2QKQKDRv8QlBpbkJBJEhKRyRKQSQiQkciUgkhdDkgkjkg4xJDkhiiSojIxEaFGxuOBMYDYImgcIljGhcEWIIx0o2CHRFRQ6JhVQaCohBGdNCW4aBC4ReQmzkwG9zrNoRtgzlQNsCVtlw0uQqbbGJAzVIoqrSVpi1DexvNhRhYkgjEZGIyOMaoewK47Ei1COwmEaZaxoUNyjvwF2hpBKIzIcAJQLTj7C5RHw1HJArTgaGSHsVskByEoZIWihkx7vY1skCrlx+xSaypwoUo7lzNDfgRGHzfeTWdaWijsjVxqqM/RRqjSiuCGmfi5CmhiW4nG6SLEGnVk3ShRiNUSYpDEkjK6kBbVA0MaIaMtb6AUQwmRRhrXQFLcNI5INIyNxJBIwg5nENgAtgSYTYEgAHyLkMYth0i2gaGtEULpBUSVENIlInpg7DlEakT2ocoBBtMekpK0LqgoumdHi8nPRBlATOBbaTVoVKJ35oUMmO72KeXHzsauSBUyw5NYGPnxWnsZGrw1bSPR5cfsZerxWnsXKmsJrwCkOyxqbQuty5UuSDRCVEoVBkfBYxvgrofj8GOibkpipz2Ankor5MteTSQ+jyZAE7YjubZYwxsL6B+ONjkjscNixHGZWrhShsc00h7SSM3W61Y7hDd/uCZ6fqJ1GRQg02rZQedXyVc+bJkk3Jv2RWlOSNJ4kVqLVKPL/AGljDrk3s9jzk5Sb3ZOPI4PZtF/+U4HssGrT8l3HkjI8jg1UlW5qafWN7N0Z3FhvQpJ8M5ppbFDDqU1yWo5b4ZHDFfhj9Ph75q1e4uEPiSSSNTDCGDE8mRpJK234KzBBarOtDpHKKTyNVBer9/ZHjdRpp5JynNuc5tuUny2/J6DJqFrMrk9vCXohctKpb0jVX15LUaBrdL9hmzjLHJpqj3GXRpqqMbqHTbi2luLvEWMBMNMVkjLFNwlymdGVjsI9MNMUmMTIpiRKR0QkRQ5IlIlImhdAaAcR1ENB0KziA4lhxBcC5QSoWx0YUgoQGKIroBURiRKiEkLphSCoJI5oYAyJNUS9hc2OArNNRi2/Bk5JvJNt/cWNblbfYn9SqkdGJydRUxDSBSGJF
UhJBJEJDEiLQlIlRCSCSI6AqJKiFQSQugKiMSOSGxjYrQiMR0YnRikNSEaEhkUcosbGJNppgixBC4RHRRnQYkGgY8BrkyqhoIFMkzsNzYSewts5PcIQvJJyVnUbQOOaOONIEeAJvZjKsCadDCuluPxrgXGLssQg9mMGxhaDUAsK3pof2Kg4ZKhTHwQPbvY3Gtw4BVuHGNnVuMithqLa2FSiWGgGhhVnErzgXZRETiPhKU4lXLDk0JoqZY8gTLzQ3ERh8xeywFQhvwRUVc0kWki9FcFbTxpFuKEqH4VaofFOxGLZotRRl5J1Y436jVwBFDEji1CdRDCBlwSAs6iLCTFZ0nJEnHEqSccQ2qEHNgNnNgtgENgtnNgtkkhgslsgXQiiUjqCS3AnJEpHJBJAaEiUjkgkgAWiBlbANDl4B434YUoilsyxH5o+56Hh32cJWyQK2SBflEr5IbHXAzMsPYz9RjtPY2MkFuUc0Nmi4VeZ1mOm3RUo2dbj2exjS2bTKiHJEpEJhJhQKI7EITHY3uZ6JbyZBDk2yJO2TCLb4NgZii2y/gxitPiutjSw4qXBlqrkFjhQ9RSW4UIJIr6vMscaT3fCI4v4r67UKEe2D3f7DFnBuTb3b5Ls25ybe7YLha4NZ6SzsmPbgrzh7GpPHsynkh7FSkz5woU0XMkBEo+xpKSMWRp1Zfw5OChDDOc6Sb+42tB0rPmaSg37JWTvUh8Nw5Xao1dI5zaVb2O03Qpxp5HGH1e/4I19P07HiaeOSk63/uMf1KfB6PAopNoo9S16yZXgxv8Ak4um15fp9w/q2remwLBhdZJrleF5ZhQT2K+CrsObWxpaWamkmZmHwX8Kppi/RxfenTVoo6rSJp7GrgdxSYWTEpLgOr518765oGk5xVNbnn4y9z6P1bSKWKSrlM+cZ4PHnnGuJNF4vfTKzlNjIbGRVixsZDuSWkw0yvGQ1MysM9MJCkw0zOwDpENHJnNiAWiErZLDih9DlEJRCSCSF00JBJEpBUUYKIaDaAeyHATIq6jIoQbfJan5MnWZHOdJ7I0xO1NV23KTb5ZyRyQSR0dJyQxEJBpEkJIZFAxQ2KM7QlINIlIJIkBo6hiRPaLpIih0UDBDlERuihkYnRiNSJ6aYwGxidBDEhG5INIhIYkTQlIk6jjOmlOie8GjqM6QnNe5yasVLYFSpkdLq0mSwIu0d3GuaYyVuL7g47s1hjSO7LDhG1wNjEuAhYqd0OjBDFAJRrwUYVGmPS2AS3GpbIfACgocktHRVMDMrcYlsAuQ0hhzQLQxoFoZkyQjIi1JCJoCVZorZY3ZelEr5IipM3JACEN+C1khuRjhuRU03FGkPigYRpDYoSoKOzsuY90VUixifCFqdij0g0tgUrDOTcCHwC+AmC+DCkU3TJTIkAnuIj07JATDsVhuIbJbAbIMLYDYTYDYqSG6BbObIJDjkcSlYglINI5IJIAGgkiaOGEUSjjkhhKRDVhImrQGVVMZjdMFomDpm3i1ylTpRtCMkS0t4iprk9TJKGSPJTzQNPJHkp5YqmawMPW4009jz2ojU2er1ULTR5vX46k2kOfWdUkwkxaYSdDpGJ2NxvdCUxkHuiNA9Jt0XMGFunQOHDbto0sOKkth60cgsGKq2L2KFLgHFj9hzahG2ZNJAZprHDky815JNssZZvLPnZcAONlT0FXs9jnDYsOCSAmkkMKs1syllStl3I+SnNNtlJqrODb2LGj6bk1M0oxbv2Lmg0Pxp3LaK5PTabHDDjUMcUvV+WZeTzfn1DkVND0LBp0pZvml6J/vZqRrGuzHFQj6LYNJqF+oF03fhHHrd0sUWlswlJpqmKW6JttilAs2n0+pd5ItT47k939RD6ZhrbI0/eP5D1LcNyvdfgbTdKqa0Tg9pRa9nX7xuODhVtfiFJ7i75L/AHSXMWaMOWizHVYmqt/gZKYakH7sOVc1EIaiDjFpv04PAdZ6NnwanLN432ttp0e5g73sbUckHjyRU4PamrQTzXNFnb18kljcHTRKdHuOtfZmM4yz6JXStw8/3nismN48jg0006aZ153Nz0mzgosbFiExkWFiVhMNMQmGmZWGcmTYtMJOyeAa3YyKBghqWxFMSQSWxCQSQzckTRKRzHAFi5cDGLlsrKgVdTPsg65ZlSTbt+S5qJOc36IS4extn0XCVE5RGqBKgV+i4BIYkSo+wxRJ6XERQ2KISGJEhKQSRyQaQg6jq2CSCS9hEGCpliC2FxjuOhsIxpBpWQqYaW5IMihiQMEMSBUcluGkQkFZNCTkjluSkkjOmitgWE2A2Z1IZMU3TDkwGZkfjlaDe6K+N0x1muKHDMbdi7GQdM2hruGmhqjuV8Mi1FplxUSlsSkEcluWHKNsZVI6KCZRhaQKW4TOXIASW41IBLcYhhzRDQdEUBlSQqSLDQqSAK0kIyRLbQmcdhFVOcPYiENx8obnRgTYTlGkGkEkSkTwOSHQ5QpKhsOSapZg9ggIMMw3AhgvgIFo5bCLmthfDGtCmqYiHFhp7C4sO0KhLYDZNgtmdMLYDZLYLZJIbOOOSJDkhqQKQ1IIbktiaJomigGjqCaIAIomjqOoDcEuAUShwOaBWzDkC+TTP0U7G9iZoDGxr3R6XjvYlVyRKmSHOxoSiV8kPY3gZOox2mef6lg2bSZ6vNju9jI1uDuTVchU2PIuDTJpl3NgcZvYV8Pfgf6QQkxsE7C7A1CmTaG7hw1Tou48fGx2LHxsWoQSVsn61kcoqCtlLUZXOTiuEx2pzf6EXv5KqjuMV0IBNJBpUrBkAKkV8jHzYie7oCtV5pt0kN0+geRqUlS9SxpdL3y7pLZGpCKpJKktkjPfk58JXw4VBUlVcGhhg2k3x6iEqkki3Dwlx4OXV6qGtqkk7oTJ0n6tpf8Av8B3lX4FZIUt/WyDCpUqOnJKKpbt8+iBStr08nNqTTTHPQFB7DE1W7S+oreqRL3W/gqUCat7CpNXQabi7TByKm2zWeyAEmqpANtkwTKI/FzbZZTpor49hyd72RqdUen9x8++12mjh65kcIpLJCOSlxbVP9qPoFppU79TxX2wTfVoeawR/ezX+P6tgvx5dbMNM6caYKdHX9ZnJhpiE6GKRNhnJjI8iIvcfBbGd9A6I5IVAcjPhjQUQUEhmkhknMDAyvqZ1ClyyzLZNlDK3ObfjwOBX7LOcByj7EuGxX6HFfsJUBvYEoh+iLUAuwYo7E9vsL9ClJBpE9pKVDlQJINIhDEgDlENQ2Jig0g6ApUSlRLVHAQkxikJQaewjWISHJlSLpjovYRnWSmLTCuvJJmJktinM5zIpdE2C2C3ZDZnSQ2QzmyGZ0kxe41OxCdBplZoOTCT3Fp2EnubQ1nFOmi7CdpGdBlnHOjTI7xcTDiyvGVjlI0V05NBXYpMK9ijSyUgVuw4oOgcUNSASDQ1RNHUSiSjLaFtD2hbQAhoVJFloVNCKqrRKiMcdzkiEho6g6IoQQFF7kM5OmTVHxYxMTB7IamYahDIfBKOZz6gKaFyW45oXJGJAiFYLRAf0BNgtnWC2Z00NkEP
kkikhchpEJBpCMUUGlsQkGijdRKRxwwhkBUdQwA4JgsOE45MiyU7HIBWC3ucCzSQxRdMsxdopp0yzhdqjr8N/pImthU4WWKBcDthqGSHsUNRhtPY2Z47KmXFa4Ck8zqtMm26KE8FeD0uowXexm5sG/BlfSLGO8dHLHuXp4a8AfC9iLpPHosUElYGoy9ipcsLLkWODKVucm2ateoSbdvew0g4w2CcaAAapCpDZCpACp8k4cDm7a2XLCjBzkkkXsUKSgl9fcz3vkLjseNNJJUlwOhBxW/kOMFBK02wlb3s5e9+mrJNNvyWcLSSTFZF89LikxsE2tgohydsjLODpKS2VMVKV7J7fvBeyF8PostRikmnau16C4LYnLVQS5r+IMW6YX6Q7aYSdqxSm3yEpMQG3aoGdtv8Ak4xdy2fhLkU8iT4b+81yEqL9AorcLFOE0+U0cpK9lRqRidLZBJ3yAmgkrYudM2LPLfaPF8fqM5x3UYRh+C/vPTO4Qb8+DD1OFubb3bbdmmJ+Q8nmwSTexVnBp8Hpc2nTvayll0ifKN5SYlkpl7Jo0m6Ql6emV2JRj3LURUcbQ2KoyvszoDkKhwNiSoaCQKJEBHAnN0m34BUJ1E6VJ7srpBSl3zbJSEEKJPaMSJUbJtMnsCUB3Z7Hdov0RXaT2jVAJQCaKq7iC1Q9xFtFSooE9xsGBQUUaEfFoZETENMXAN8AnWcMOCTAs6wI1NBqVFbuZEs3Zt5FwdXHkSRHxGygszb3YxZLDhdXFMLvKin7jFMzsPqx3HWKUgk7MaBnHI4zoQ0ctmS0dQ8gSYabYuIcTbIMi6HQl7iIhxZtIS1Gb9R8JtlOLofBmkHVqMg+5sRGXgbHkVq4bEbFiUw4sjqj0GhSYSkaSnDVySAmGmaRSGrBaGAtACmhc0OaAkhAhrclLYOiKEktoig2gGqJoCwWwpAMimbCXA6LKkHuWIOzPRVYRzIjwSc2wFoBoYwWjnoKaoAZJbC2IkMFslsBtEUJOBslMgDSGRQtMYnuBmJBIFMIo3HEnDCDiSG6AIYDZLe4LY4SLJTBIsuEbdgs5M5lHAN7ljC9xD5GYnTRv4rypq6laJ7SIMNLY9HM7FwtwEZMfOxcaAlHYdgZWbF7Gfnw87G5lgmUs2PnYy1E1h5MVXsV3jp8GtmxV4Ks8e5z6RSp5Hlnb48DMa3EQRZgjoVD4LYlrY6DpHSaAyZIU4tvYbJ2M08E5W+EK3kBunwqMG3y/2B6dVl39Tm6V8K6OTcZJ+5y6ttB7VvcjjwMcGla45FN2zO+jBNXNP1SGNqEK8vn6HOOyk/Gwmc23uvvKpJbV7uvY5StpeoMVb3Z04yim0t3shSdI2MHkba8C5PtlXk7DknFdtk0q726Xq/I+AuMXJ7Ic6ikk7fqLU1dJUv3k3u0KhNb292DNWvcN290RV7tbFZFRh+WL9WGuQU/YZFbbmsAluPhG2LgrfFDmnBG2MdMGZOrW6Xgz9RFNX5RpKaunwVtVhTTcXya3IZMsalwJyae72Cc5Y5tNPZj8eSMlyhQRl5tM6exUlgab2N+UItFXNiVPgo+MSWOmQolvNBJiWiKkMUNQCW4aJMaOORDaFwJvcRqMlLsXnkPJNQi23wUXNzm2/IcB0ENSFwHRVkWrElYxROhEcomN0C1AnsHKJ3YR+gUoBdtIYoexLjsOUK00qEtIsTRXa3Nc1nQ0ckjmBbTNoR8Q1wIjJjFMZDbIbBsFtoAO9iLBTsDJkUFtyPhW8FkyKCpbsrNtu2C5Nu3ySh8SKwlJgpBJADYzGxkISGRM9GsRY2IiDHxZhRDEEgUGkZm6jqCo6ghhSDRFHV4N8lRpoJMWkMijaQjFwNi6FKkglIslrG99xyZShPcepoy3VSrHfQSmVXMF5kvJh+l9X1k2CU7ZmrUJurHQzJ+TfOhLGlCVjYso48ifks452kbSrlWCHuCmTZZoaAaDbRD3EC6IaDoholJTQDGtC2hUFSFsa0LaM6YU6Y/Gyve47GzPRLUWGKg9hlnLuhwLCIZhQW/IqWw1oTk2ERcnQDkROW4mUyKDu73OUiv8QlZERyl1bUhiZUjMbGYlLaYaZWUw1MfTPtHX7iu8jvK6DWwXIW5kOQdA2wWyLOsqJcccSkXA5BN7EJEtbC6YXyFB00C0FDZm3jvtK7B7IchGN7Icmer4/ioIhok42s9Gr5EVcsbTZemtirkXJjqCs7MuSpNJ8ov5o8lLIqbObeWdZ0Gh8GinjmPjP3NuHKtKdIBzFd69QXPfkAcnbSLeOPbD3Ken+af0LabSZj5aYnNdiTfDGpwcLvcqZG+2l6jdO22kzEursZpQ7WrdKmKa+ZUdGXdJvx4JnLsV+Xt9xP2mjJNJJK6X7xE5Jpvcmc2luhE8ndshptNhk4SSLGO8kHa4K2KLpbDpzah2wdp8jgA3HG3St/sQuTc2m3Zz7m9w4xbVJWxhCTfgNQk+FwNjCKVPd+gxb7cL0QvUPhaSiqk69lucnF7U6GSSfhEpKg6YKjXyp37kxg29wkqdoKN3uP9A1VDC37pHKSkqFa2fZom/LaX7Srh1F+Tq8e5JylVjIux+xDfdBq/ATmpwabKiy9mVwb54NrofWNqpzx6ia5V7CPjtO0qZe1+LubaW5nVZERPXo9a2aW6sDJre5U4NA9guWL2BcJyZO5tgW2xjx0R2UKmhINIhIIkIbpC5S3CkyvmyKEGx8K0jV5rfYn9ReMQm5zbfksY1wVZyFKt41wWYIrYy3i8HLtcNhEcogwQ+KOfVUFRontQyiKI6ApAyVINgT4KlKq8xEkPn5ETNsopTAb3DkKkdGUiTDTFRCTKI1MGb2ITsHNNRj7gKGeRQW27ENtu3yA2222SmXEDXIxICCsfCNgblEJQYyMBij7CMlRDSGdm3ALVE0xQHQER5Hw8GGoDojEgIIdFbGVORCRKQVAt0gydgXSIvc57s7g6MoqUEmwUworc3hDTbCSbOirGRiTdAKTRNtDO0hxMdU5CpZJJFfJkk/JZnEr5ImR8VpZJJ2mNw6tppNiZoS072LiL6b+n1CaW5fxZb8nmNPmlBpXsa+n1KaW5tnTTOmzGdh91+SljyJpbjlkXqbdayn2dYrvXqSpr1AdMRDYKkc2InMBom7OfBNoKkqEyHyEyMrQW9gsbpgMhSp0ZapLsGNTsq45jlKzk3ozbIZydokzMEivk8lmRWy+Q4lUyvkp5J0WMzasoZ5tDmb1Fovjb8hLKn5M95KfBHx2vBp/5o/TWjl9x0cl+TGjqq5Y6GqT8oz14ac02Fl9wllRlx1Ka5GLUJ+TK+Ormml8U74vuZ36QvU741+Q/FP9NFZPclTvyUI5vcbCdl4wOrilYaZXhKx8WVYY0GlYMRkUIOSJa2CS2IfAqotkLZhSIXJt4/qKuY+ENTFYv1UG9j1cfDhqOATCs36oM+CtNFmb2K8zHVCplXJRzR3NDIU80bsx37RXmoTXqMWTbkpKYXxPc34jq58S/J3xPcqfE9zlk9xcHWxoZKWSvVFzJ8sbMfQ5Wsya9TXlNThsc3lnKuU
5HuzfIQ2C2Q2dZoE2dZB1gSbOsg6wCbIs44AbjVsswRXxFvGtjLdM7GixAREfBHLoLMEMa2AxoelaOe32ZMY07LeHwKUNxuNUL9A9OhkWJQSdDlI1pMD4dsOO41R2LlLiu8VKxcoFxx2EziXNGo5IL0KuSFmhkgVskKNJQpuNEJUOlEW0WmoRJCJAkMFhMGRIAwJBsXLgQLZsdGjwZDNvo8dkzPz/6CfXo9OuDZxbYV9DI063RrrbEvocngn+TfP157rUqhN34PEt22/c9f12dYsjPHPk9DDLX0SYVgIlM1IVnWRZ1gE2cQShBxyOOAjYOhndsJiFYqY+47uAsFsAZ3e5d0St2ZydtI1tDDZbGe/UJq4Vsi5jWyK2FbItwVI4dfVmxQ/ErkhMUWcC3FPqov4VSDyOkzsapAZnSf0O7M5lpPjL106T+h5PXS78r9j0vUclRZ5fM+7K2Rb7ZaJSOcQ0vYLtDqSHAjsLHad2Ico4R2APGW+0FwK6SooUxsE0NcF6HKFB0hY+BqQEVQxLYQBHHvwMji9iysVPgNY/YVrYOHHTL+GO6E44UuC5ijwRVLGNFjGtxONFjGtxwz4IckLgNRpDEiGSuAWywGQqb2GNiZvYAr5XRl6rJ2+TQzPZmNrW5WkJFUs8+663M/MqtsvbRTvkoaqVscRQQnTLWFt1ZRxptpGnp4LZi18C5iWyvYcmvHgUltQSXhGVM2Lt0PhBPkRBNVsPi2lYjFKCrYzNfFrG6NFzpGfrZppplQPG9RVzZnpGt1OCttIy63O7Hwq5IbBAxQ2CHSEkGkckEkZ0nUdQaR1CBbQDQ1oXJBASxbGyQqRpAW2dZDYNlgdhWKsJPYANMNMUmFYgOyUwEyUwA7OsGzrEo2L3HwZWi9x+NkaC3iZbhwVMXJbhwYUHRGIVEZEkJsFsJgNgReR7FTK+SzkexUyvY0ySrlfJRyP5i5lezKU3uzfIDZKYIRYcSDZIiSdZBwBJy5IOTGD8bLeN7FPG9y1BmGzWYblnGivjexahwcuzPgOiJixqZz6M1Uw0hSYxMyoMT2OshMJLYcopmN7lqLTRUiqHQkawj6TQqcAlJnN2ioFecLEZMe3BdaFTiXKTMyQa3K8lTNHLDZlHLGm2aykUcccNKGDIJgsABgS4DYEgBT5PQ9JVQR59btHpOlKsaMf5H+p5+t3TLdGtN1if0MvSq5L6mnm2xP6HP/AB42jyfX51imeTo9P19/yTPNHfj4w19QSccaB1nHEpAHJBJWEo2EoiAEjqG9p3aLoLSJoOgWLoQwZBMFjCcS7ppe5vaKFJGNpI92VM39LCkjDzUT6vYlsizERjVFhI4v7WbHkt6dWypBbov6ZcF4napcSpFbUSpMsPZFHVSpM7L8XfjE6pkqLVmClbNPqeS20mZyXkyZOSColKjhhFHUvQlEpWOBCR3YGkEojSV2Hdg9Rvwd2ACVAlR2HKJPYPpLyx78BLHvwPUA+z2JroKhClwOxqmEobBY1vwSZsFSHY+RaVIbjRUCxENARDRpDFewDZLewDZQQ2IyPYZJiJvYCVc72ZlZt22aWZmZqHSYIrO1E0m9zPyO3bLedtypbgQ0s57tOg6z72l4cdtUjTw46XAen0lJNqh8oqKpIzt6qQtypUhmJb7im0nbZyzK6TFVLdpukMTVFWM/NhLJ7iA8rpGPq81yas0c2RdjdmDq8m7LzOhQ17tNmQ+WX9VktNFF8nZj4Qo8j4IRHwWIBojEg0gYoYuDOkijqJo5oABi5DWKkOGVIVLgbJCpI0gIkCMktwEi4bkiSUiaAkIJEIkQScjjgNNnWQdYA2DLGPwVYvcsY2Z6NdxFqD2KmFlqBhQemGmLXAaZKUtgNhN7APgIReR7Mp5mWp8Mq5eWXkKeZ7MpT/WLuXgpT5OjIQcccUBHAnAE2dZJwEiyTiEANgy1B2U4umPhIy1AvY5FuEtihjkWYT4OXcNci9h0WVYSsdFnPqGemGmJUg0zOwzkw1IQpEqZPAsqQcZleM78jYs0hLEZJoNbiIMamXIBNASQb4BZUBOSNooZocmjJFbNBMuEzWqZFDskab2FVRaaBgsYwGhkBi5DGLkABH9dfU9P0xVjR5nGryr6nqOnKsaOf+T/AKnn629IrmjQ1LrEylol86Leqf8AJMy/j/G0eM6+/lr3PP0b3XnbS9zDaO/Hxz36E4KiK3LDkhkINs6EW2PUe1IVoCopBUFRxKg0cSQ+BAL4BYTBYwBsBsJgPdpe4yX+nwtpvyb+njsjJ0GOkjawqkjj8t9nlZxrYfFCsa2HRRguGQVtGlp40ihhVtGliVRNfHPap9FkdIytbOk9zRzSpGJr8lRbOi1WmDrZ92Vq+BEeDsku7I2SuDNkJsghsixgcQ4oCIxABpBJEIJDJKRNI6MXJ0k2/YfDTTfNIVvPoISCUG+E39C9j0a2b3LUNPGPhEXy5g/KEhkYkJDYLc1boaqPAONbjci2oCC+Ymj+zq2Q3GhaXAyA4DYhIFMlM0hpbpCmw5vYRJ7FEichGR7ByexXyS2Aqr5pcmZqHyXc0+TN1E+UFZ0nFjUp21ZqYMKmltsZuGVNGrpsiSRPTzxdhpl2UlRQ1eGWO2tzSx5kluxOoaybeCbYvU/48xnzzc3FppEY8teTbyaTHkVNJlLN0vl4217BLGZCzbck/G9ypm0+owXcW0vKKstS1s7TLmen1e1GqqNWYupz23uTn1F3uZ2XJbe5rjBoyztiU9zm7ORvJwjI8jsfgTEdAnRHRGIXEYiCF4IaCOYjLaFtDWC0MENCpIfIVMuGrzW4NBTALNNHIk4CdRxxwBxx1ggE2dZ1kWMzIvcs4m7KsHuWMb3I0F3E7LUGVMT2LMHRhSWU9gkxUWGmRSE2C2SQwhFT4KmXllvI9inle5cCpmfJTlyy3mexUk92b5CDiCSg4444A4444CcECcAEmNgxSJTJoXMcixCZQhIfCZjrIaGOY+M7M+E6HQyHPrBrymGplRTvyGpmVyaz3E99CFM7vJ/IWYZNyxCdmcp0x+LJ7h+Q0oPYcinjndFqD2KkBiex0jkE0UCmhOSNostAuNrgcDLyw3ZXcaZo5se9pFOcdy00imA0NaBaBJTQpoe0LaH0Awr+VR6jQL5F9DzmCP8AKo9LoFUEc38m+lZ+tvRL50P1jrExOiVy+4brX/JE+D41nx4rrbvIl7mS1bNTrL/lkjMaO3HxhfoaJSJoOEVZZDxxpWyW9zm6VIgRus6yDhGmzmyCGwDmA2E2C2ABJg413ZEjpMbpI92W/cd9Qq2tFCkjWxrgoaWFJGjjR5+72qh8OBsULih0ERFRY06tmhHaKKmmjvZbk6R0eOemmVXUzpPc871TLUGrNnVzpPc8t1TMnLtvk0qN1TTt2GnSExmF3bC4gbZ1i3MhT3DhrEWMTK8XwPwwnkdQV+/gXwGJlzT6aWSnJUvQbpNEo03u/VmljxJKkjDfmk+HwjHp1FUlS9ixDGl4GdoSVI5dbtOQKSXgKjjrI6ZaQ2C3FxQ6CPXaIyIXFVIbMWv1iak5PgOI
pPehqYRRiJsBM6zSBGSVIS5E5Zb8iZSGSJy2KuWdIPJLYqZph1FqvnnyZmebcqLOfJdpclKabdk9Z326OTtZbw6pKjOnsxfxHHyH5tOXj0C1ardhLWQe1o869Q0uQP0p3yyL46f6ephnT82PjJM8ti1ji07f4mhp+oJ0myLnWTlbbxQmqaRna3o+HOm1Gn6rYs4dXGSW9/eW4TUlswnksPkrwXUuj6nStygnkgvRboxJXbTTTXKZ9YnhhkTTS3PP9W+zuLUJzxrsyeq8nX4/PPlTZY8KwkWNboM+jyOOWDrw0tiukdMsvwS9MiPgIjyPgTQahiFRGIikYcQmSIwMFhMGQ4ZUhUxsnsJmXARPkgmXJBZuRxy4OsYcQ2dZAE446yANzZx1HDAo3ZYhyIjyPhyRoLmJ7FmDsrYixAwpHxYxMVEYuCKQmQ2TYDYEDI9inlfJayPYqZfJcCpmZUlyy1lZUlyzfIccQuCSjScQSInHHHAHHHHAQiUCggAk9xkZUJQaZFgWIz2GxnRUTGKXuZ3JLiyDFkKamGpmdwfVxZAu8prIEshP4NZ79xmPJvyUviDMUraFcBtaeV0X8b2MvTT2Ro45XRnzhrKCAjug0gDmjqJZFiCvlhaKWWFNmlJJoq5oJlSis6Spi2WMsaK7GktgsNgMZGYF/Ko9Jol8i+h5zT75Uel0a+RHJ/Jqs/WzoVu37E691jYWiSSf0A17+Rr2K8H+rX+niOsO9Ql7Gei91Z3qvuKK5OzPxzpSGLZAIKywJsghs6wNJxFnWHA6yGziLEHNgNhMBjh8BJl3p8Ladc7lCVtpGz07HSRHkvMp/tr6eNJF3GivhVJFqCPP0syCHwQmJYxq2giou6dUkxmZ0mdhVRFaiVJnVmcjSeoytdkpPc8dr83fqHvstj0nVMqhCTvwePyz7srfqypO1hu+z4TGqVopwk7GxlsPhQ5uzopykkk23wkO0miy6lppOMPV+fob2j6bDCk0rflvlmW/LnClDR9OnOnktL0NvT6WGNJJJV6IsY8SiuKQ1RXg4d+a6VwEIJDEiUqJRiYWqIbJkA7AOs6yDgCYjsa2ERH4+D2FomKvcZkYi9yaR6YaYlMYmEM2yG9gLIb2Lh0rLK2JlLYnJNJttlLPqVFNWPqLRZsqinuZubM5NpEZMzm+dhTaSJ91AWt22+RU2kHKaS5K2Sd8F5yVLyNFTK3ZZnwVchrIkptsG6CYLKNHe0xkNQ0+REgGxfmU2rh1rg1v+01tJ1JNpNnknNryHi1Ti1uY78HfhyvoWDVRmluWlUl6nitH1FxaVm/o+oRnSs5NZuGkq1rNBi1EGpQTT9UeR6p9nZ4W56ZNrntZ7jHkjNbNEzxRmmmrL8fmuU3EvuPlTjKEnGSaa5TQyPB7fqnQcOqTko9s/DXJ5LV6DPosjjli68SS2Z3Z8k1Ed59KQxC0HEZmJnMFBAAsCQbQLQAqXAiY+SEzLhkS5IJktwWizSCccMJsizrOqxkjkmiapHNgaGRZzIAGR5HwK8SxAnQXMXBZhwVsXBZgc9I2IcQIhoikmyGS2AwBeR7FTL5LWR2irkfJcJTyvkqvdss5Xsyq3udGTScccMOJIJAnHHHCDjkccBJRIKCQBJKIJQgJMJMFBIkhphJgRJFwC7iVIE4XAYpe47FLcrIfi5QrPQaumlSRpYpcGRhbVGhhnwcuopp43sh64KeGV0Wk9iA6TBOe7IewG5sVkVoNsFuxhSzQKc1TNLIrTKOaNFQldi2G9hcnsUk3Su8qPT6NfIjy+kd5V9T1Wj/VRxfylZbWjXyv6CeouoP6D9J+oyv1L9Rl+D/Vr/Tw/VHerZURa6k71ciomdufjnEjrIshsoCsiwWyLGY7OtAdyBlPakPhwbn4RHcKs67Hwzu4BsFM5uw4BY1eRfU3tDGkjE06udnoNEqijn83wue2liWyLMUJxLZD4rY4aocUWsCtorxRc00dx5+nF1KolHVzpMuzdRMnXTpP2OtpfUec63mrHJJ7s8wmzW63m7snZfkzdPhyajLHFii5SfhePcrPqdc190WHHPJNQxxcpPhI9D07o6VTzrul6eEXOl9LhpcabVzfLa5NiGNRWyOLzfyPfMrkKw6eMEkktvYsKCRKVMOtrOK21YFsSmQzooQGdwddENgEN7Arc5s5eBQOaBewbewtjCYofDZMTHkclsz2Vl5HyITtsbkfIlbszpHJhJi09iHkUfJUM263F5syhF7lXNqkk6exl6nWubcYsfUXR+p1e7SZQnNzdt7AW3u2C5pLZjk6gbkkhc8iS5FTyVshdtvcuQCc22CccaRNLnwVspanwVc3JQJYLCkBIZgkBINgSGZOR7CW6G5GKZcODhmcHyaWk1zi1uZDIU3F7Mnfjmob3Wg6ldJs39PqI5Et1Z820mrcWtz0vT9e9k2ed5PDc30vr10UpIr63puLVYnCcE016C9NqlNLfc0MWRNclYosl+vn/VOh5tFNzxJzxenlGUtj6rn08c0Gmk7PG9c6FLG3m08fdpeTqzv/AKys/P8A/HnkwgUmm00014YS3KHUNAtBsFgCpLYRNFpoRkWzKlCs1uC0Ma3BaNFFtUQE0C0UEUEtiDrGEtkMg4DcR5JIXIwZAsY1uivDks4zPQW8fBYgV8fA+BhSOiGgIhoik58AthPgBgCsnBUystZHsVMrNMhVy+Sq+WWMr5K7e5vPgcuCSESUHEkEiJxxxwg5nHWcBOCQISAJRKIRKEBIJcAoJCAkSRElCJJxxIg5IsYUJirZYxqqJ0FrHsXcMnaKWNXRZg6Oen1pYZU0yzGZnY8niyxCd+TI11NPc5ioT2Du0AC3uQ2c3YLZUCJK0VM0bRabE5FaGGbkVMRN0W80aso5tmaQqsaJ3mX1PV6T9VHkunu8q+p63R/qo4P5f08NvSKoMrdS/Uf0LWk/UKvUv1H9DTw/6tb8eF6l/ncirexY6k/8rmVLO3PxzisFyBbIbL4BWQ5ANguVIrgHKfoBYDluSmVxcHbZKBvYJAoSJIRIhxZ0kW5Weg0iqKMTRR3X1N/TRpI4/NSXsaHxQnGtixBHGDIIv6eNIp41ukX8bUIW2kl5ZeJ7VPqczpMw9fNtOi5repYcaaTTfueb1vUZZpOMHsb2jepPTK1GH42qdLvbdJHouk9OWnxptLve7aX7BHStFbWbIvmfFrhHoIQSVJHJ5vNefmIzP7DGFLgKqDpHVaOKrCgm6VA1RFN7jDuWGkAtgkxBL2AbsN8Cm9xBz3J4IXKOY58DnwA2E+AWOB//2Q==\"]}" http://localhost:8866/predict/ocr_system
\ No newline at end of file
diff --git a/ppocr/data/det/dataset_traversal.py b/ppocr/data/det/dataset_traversal.py
index c4ad5ced4f0240fa776afce8b7bf3e34310bd186..bd055c82be3bd5c762ccaf7a0134f89fbe4fe290 100644
--- a/ppocr/data/det/dataset_traversal.py
+++ b/ppocr/data/det/dataset_traversal.py
@@ -31,22 +31,27 @@ class TrainReader(object):
def __init__(self, params):
self.num_workers = params['num_workers']
self.label_file_path = params['label_file_path']
+ print(self.label_file_path)
+ self.use_mul_data = False
+ if isinstance(self.label_file_path, list):
+ self.use_mul_data = True
+ self.data_ratio_list = params['data_ratio_list']
self.batch_size = params['train_batch_size_per_card']
assert 'process_function' in params,\
"absence process_function in Reader"
self.process = create_module(params['process_function'])(params)
def __call__(self, process_id):
- with open(self.label_file_path, "rb") as fin:
- label_infor_list = fin.readlines()
- img_num = len(label_infor_list)
- img_id_list = list(range(img_num))
- if sys.platform == "win32" and self.num_workers != 1:
- print("multiprocess is not fully compatible with Windows."
- "num_workers will be 1.")
- self.num_workers = 1
def sample_iter_reader():
+ with open(self.label_file_path, "rb") as fin:
+ label_infor_list = fin.readlines()
+ img_num = len(label_infor_list)
+ img_id_list = list(range(img_num))
random.shuffle(img_id_list)
+ if sys.platform == "win32" and self.num_workers != 1:
+                print("multiprocess is not fully compatible with Windows. "
+ "num_workers will be 1.")
+ self.num_workers = 1
for img_id in range(process_id, img_num, self.num_workers):
label_infor = label_infor_list[img_id_list[img_id]]
outs = self.process(label_infor)
@@ -54,13 +59,64 @@ class TrainReader(object):
continue
yield outs
+ def sample_iter_reader_mul():
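+            # Draw a pool of ~1000 labels per pass, taking from each dataset in
+            # proportion to data_ratio_list, then yield the processed samples.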
+ batch_size = 1000
+ data_source_list = self.label_file_path
+ batch_size_list = list(map(int, [max(1.0, batch_size * x) for x in self.data_ratio_list]))
+ print(self.data_ratio_list, batch_size_list)
+
+ data_filename_list, data_size_list, fetch_record_list = [], [], []
+ for data_source in data_source_list:
+ image_files = open(data_source, "rb").readlines()
+ random.shuffle(image_files)
+ data_filename_list.append(image_files)
+ data_size_list.append(len(image_files))
+ fetch_record_list.append(0)
+
+ image_batch = []
+ # get a batch of img_fns and poly_fns
+ for i in range(0, len(batch_size_list)):
+ bs = batch_size_list[i]
+ ds = data_size_list[i]
+ image_names = data_filename_list[i]
+ fetch_record = fetch_record_list[i]
+ data_path = data_source_list[i]
+ for j in range(fetch_record, fetch_record + bs):
+ index = j % ds
+ image_batch.append(image_names[index])
+
+ if (fetch_record + bs) > ds:
+ fetch_record_list[i] = 0
+ random.shuffle(data_filename_list[i])
+ else:
+ fetch_record_list[i] = fetch_record + bs
+
+ if sys.platform == "win32":
+                print("multiprocess is not fully compatible with Windows. "
+ "num_workers will be 1.")
+ self.num_workers = 1
+
+ for label_infor in image_batch:
+ outs = self.process(label_infor)
+ if outs is None:
+ continue
+ yield outs
+
def batch_iter_reader():
batch_outs = []
- for outs in sample_iter_reader():
- batch_outs.append(outs)
- if len(batch_outs) == self.batch_size:
- yield batch_outs
- batch_outs = []
+ if self.use_mul_data:
+            print("Sample data from multiple datasets!")
+ for outs in sample_iter_reader_mul():
+ batch_outs.append(outs)
+ if len(batch_outs) == self.batch_size:
+ yield batch_outs
+ batch_outs = []
+ else:
+ for outs in sample_iter_reader():
+ batch_outs.append(outs)
+ if len(batch_outs) == self.batch_size:
+ yield batch_outs
+ batch_outs = []
return batch_iter_reader
diff --git a/ppocr/data/det/east_process.py b/ppocr/data/det/east_process.py
index f9a0744005dd3d1e0f16c5bc0b8abd118536d280..e2581caa20bb0e63f67e110d483d3335f51d56b7 100755
--- a/ppocr/data/det/east_process.py
+++ b/ppocr/data/det/east_process.py
@@ -17,6 +17,7 @@ import cv2
import numpy as np
import json
import sys
+import os
class EASTProcessTrain(object):
def __init__(self, params):
@@ -52,7 +53,7 @@ class EASTProcessTrain(object):
label_infor = label_infor.decode()
label_infor = label_infor.encode('utf-8').decode('utf-8-sig')
substr = label_infor.strip("\n").split("\t")
- img_path = self.img_set_dir + substr[0]
+ img_path = os.path.join(self.img_set_dir, substr[0])
label = json.loads(substr[1])
nBox = len(label)
wordBBs, txts, txt_tags = [], [], []
diff --git a/ppocr/data/det/sast_process.py b/ppocr/data/det/sast_process.py
new file mode 100644
index 0000000000000000000000000000000000000000..74a848465f4cedb4a7007f61adc509d80885922c
--- /dev/null
+++ b/ppocr/data/det/sast_process.py
@@ -0,0 +1,781 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+import math
+import cv2
+import numpy as np
+import json
+
+
+class SASTProcessTrain(object):
+ """
+ SAST process function for training
+ """
+ def __init__(self, params):
+ self.img_set_dir = params['img_set_dir']
+ self.min_crop_side_ratio = params['min_crop_side_ratio']
+ self.min_crop_size = params['min_crop_size']
+ image_shape = params['image_shape']
+ self.input_size = image_shape[1]
+ self.min_text_size = params['min_text_size']
+ self.max_text_size = params['max_text_size']
+
+ def convert_label_infor(self, label_infor):
+ label_infor = label_infor.decode()
+ label_infor = label_infor.encode('utf-8').decode('utf-8-sig')
+ substr = label_infor.strip("\n").split("\t")
+ img_path = self.img_set_dir + substr[0]
+ label = json.loads(substr[1])
+ nBox = len(label)
+ wordBBs, txts, txt_tags = [], [], []
+ for bno in range(0, nBox):
+ wordBB = label[bno]['points']
+ txt = label[bno]['transcription']
+ wordBBs.append(wordBB)
+ txts.append(txt)
+ if txt == '###':
+ txt_tags.append(True)
+ else:
+ txt_tags.append(False)
+ wordBBs = np.array(wordBBs, dtype=np.float32)
+ txt_tags = np.array(txt_tags, dtype=np.bool)
+ return img_path, wordBBs, txt_tags, txts
+
+ def quad_area(self, poly):
+        """
+        Compute the signed area of a quad using the shoelace formula.
+        :param poly: quad vertices, shape (4, 2)
+        :return: signed area; the sign encodes the vertex order
+        """
+ edge = [
+ (poly[1][0] - poly[0][0]) * (poly[1][1] + poly[0][1]),
+ (poly[2][0] - poly[1][0]) * (poly[2][1] + poly[1][1]),
+ (poly[3][0] - poly[2][0]) * (poly[3][1] + poly[2][1]),
+ (poly[0][0] - poly[3][0]) * (poly[0][1] + poly[3][1])
+ ]
+ return np.sum(edge) / 2.
+
+ def gen_quad_from_poly(self, poly):
+ """
+ Generate min area quad from poly.
+ """
+ point_num = poly.shape[0]
+ min_area_quad = np.zeros((4, 2), dtype=np.float32)
+ if True:
+ rect = cv2.minAreaRect(poly.astype(np.int32)) # (center (x,y), (width, height), angle of rotation)
+ center_point = rect[0]
+ box = np.array(cv2.boxPoints(rect))
+
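+            # Choose the rotation of the min-area box whose corners best line up
+            # with the poly's first, middle and last points (smallest summed distance).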
+ first_point_idx = 0
+ min_dist = 1e4
+ for i in range(4):
+ dist = np.linalg.norm(box[(i + 0) % 4] - poly[0]) + \
+ np.linalg.norm(box[(i + 1) % 4] - poly[point_num // 2 - 1]) + \
+ np.linalg.norm(box[(i + 2) % 4] - poly[point_num // 2]) + \
+ np.linalg.norm(box[(i + 3) % 4] - poly[-1])
+ if dist < min_dist:
+ min_dist = dist
+ first_point_idx = i
+ for i in range(4):
+ min_area_quad[i] = box[(first_point_idx + i) % 4]
+
+ return min_area_quad
+
+    def check_and_validate_polys(self, polys, tags, im_size):
+        """
+        Check that the text polys share a consistent vertex order and filter out
+        invalid (near-zero-area) polygons.
+        :param polys: text polygons, shape (N, num_points, 2)
+        :param tags: ignore tags, one per polygon
+        :return: validated polys, tags and hv_tags (1 for horizontal, 0 for vertical text)
+        """
+        (h, w) = im_size
+ if polys.shape[0] == 0:
+ return polys, np.array([]), np.array([])
+ polys[:, :, 0] = np.clip(polys[:, :, 0], 0, w - 1)
+ polys[:, :, 1] = np.clip(polys[:, :, 1], 0, h - 1)
+
+ validated_polys = []
+ validated_tags = []
+ hv_tags = []
+ for poly, tag in zip(polys, tags):
+ quad = self.gen_quad_from_poly(poly)
+ p_area = self.quad_area(quad)
+ if abs(p_area) < 1:
+ print('invalid poly')
+ continue
+ if p_area > 0:
+ if tag == False:
+ print('poly in wrong direction')
+                tag = True  # reversed cases should be ignored
+ poly = poly[(0, 15, 14, 13, 12, 11, 10, 9, 8, 7, 6, 5, 4, 3, 2, 1), :]
+ quad = quad[(0, 3, 2, 1), :]
+
+ len_w = np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[3] - quad[2])
+ len_h = np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2])
+ hv_tag = 1
+
+ if len_w * 2.0 < len_h:
+ hv_tag = 0
+
+ validated_polys.append(poly)
+ validated_tags.append(tag)
+ hv_tags.append(hv_tag)
+ return np.array(validated_polys), np.array(validated_tags), np.array(hv_tags)
+
+ def crop_area(self, im, polys, tags, hv_tags, txts, crop_background=False, max_tries=25):
+ """
+ make random crop from the input image
+ :param im:
+ :param polys:
+ :param tags:
+ :param crop_background:
+ :param max_tries: 50 -> 25
+ :return:
+ """
+ h, w, _ = im.shape
+ pad_h = h // 10
+ pad_w = w // 10
+ h_array = np.zeros((h + pad_h * 2), dtype=np.int32)
+ w_array = np.zeros((w + pad_w * 2), dtype=np.int32)
+ for poly in polys:
+ poly = np.round(poly, decimals=0).astype(np.int32)
+ minx = np.min(poly[:, 0])
+ maxx = np.max(poly[:, 0])
+ w_array[minx + pad_w: maxx + pad_w] = 1
+ miny = np.min(poly[:, 1])
+ maxy = np.max(poly[:, 1])
+ h_array[miny + pad_h: maxy + pad_h] = 1
+        # ensure the cropped area does not cut across a text region
+ h_axis = np.where(h_array == 0)[0]
+ w_axis = np.where(w_array == 0)[0]
+ if len(h_axis) == 0 or len(w_axis) == 0:
+ return im, polys, tags, hv_tags, txts
+ for i in range(max_tries):
+ xx = np.random.choice(w_axis, size=2)
+ xmin = np.min(xx) - pad_w
+ xmax = np.max(xx) - pad_w
+ xmin = np.clip(xmin, 0, w - 1)
+ xmax = np.clip(xmax, 0, w - 1)
+ yy = np.random.choice(h_axis, size=2)
+ ymin = np.min(yy) - pad_h
+ ymax = np.max(yy) - pad_h
+ ymin = np.clip(ymin, 0, h - 1)
+ ymax = np.clip(ymax, 0, h - 1)
+ # if xmax - xmin < ARGS.min_crop_side_ratio * w or \
+ # ymax - ymin < ARGS.min_crop_side_ratio * h:
+ if xmax - xmin < self.min_crop_size or \
+ ymax - ymin < self.min_crop_size:
+ # area too small
+ continue
+ if polys.shape[0] != 0:
+ poly_axis_in_area = (polys[:, :, 0] >= xmin) & (polys[:, :, 0] <= xmax) \
+ & (polys[:, :, 1] >= ymin) & (polys[:, :, 1] <= ymax)
+ selected_polys = np.where(np.sum(poly_axis_in_area, axis=1) == 4)[0]
+ else:
+ selected_polys = []
+ if len(selected_polys) == 0:
+ # no text in this area
+ if crop_background:
+ txts_tmp = []
+ for selected_poly in selected_polys:
+ txts_tmp.append(txts[selected_poly])
+ txts = txts_tmp
+ return im[ymin : ymax + 1, xmin : xmax + 1, :], \
+ polys[selected_polys], tags[selected_polys], hv_tags[selected_polys], txts
+ else:
+ continue
+ im = im[ymin: ymax + 1, xmin: xmax + 1, :]
+ polys = polys[selected_polys]
+ tags = tags[selected_polys]
+ hv_tags = hv_tags[selected_polys]
+ txts_tmp = []
+ for selected_poly in selected_polys:
+ txts_tmp.append(txts[selected_poly])
+ txts = txts_tmp
+ polys[:, :, 0] -= xmin
+ polys[:, :, 1] -= ymin
+ return im, polys, tags, hv_tags, txts
+
+ return im, polys, tags, hv_tags, txts
+
+ def generate_direction_map(self, poly_quads, direction_map):
+        """
+        Fill direction_map: pixels inside each quad get a direction vector of length
+        norm_width pointing along the text, plus 1 / (average text height).
+        """
+ width_list = []
+ height_list = []
+ for quad in poly_quads:
+ quad_w = (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3])) / 2.0
+ quad_h = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[2] - quad[1])) / 2.0
+ width_list.append(quad_w)
+ height_list.append(quad_h)
+ norm_width = max(sum(width_list) / (len(width_list) + 1e-6), 1.0)
+ average_height = max(sum(height_list) / (len(height_list) + 1e-6), 1.0)
+
+ for quad in poly_quads:
+ direct_vector_full = ((quad[1] + quad[2]) - (quad[0] + quad[3])) / 2.0
+ direct_vector = direct_vector_full / (np.linalg.norm(direct_vector_full) + 1e-6) * norm_width
+ direction_label = tuple(map(float, [direct_vector[0], direct_vector[1], 1.0 / (average_height + 1e-6)]))
+ cv2.fillPoly(direction_map, quad.round().astype(np.int32)[np.newaxis, :, :], direction_label)
+ return direction_map
+
+ def calculate_average_height(self, poly_quads):
+        """
+        Return the average quad height over all quads (at least 1.0).
+        """
+ height_list = []
+ for quad in poly_quads:
+ quad_h = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[2] - quad[1])) / 2.0
+ height_list.append(quad_h)
+ average_height = max(sum(height_list) / len(height_list), 1.0)
+ return average_height
+
+ def generate_tcl_label(self, hw, polys, tags, ds_ratio,
+ tcl_ratio=0.3, shrink_ratio_of_width=0.15):
+        """
+        Generate the TCL (text center line) score map, the TBO (border offset) map and
+        the training mask at the given downsample ratio.
+        """
+ h, w = hw
+ h, w = int(h * ds_ratio), int(w * ds_ratio)
+ polys = polys * ds_ratio
+
+ score_map = np.zeros((h, w,), dtype=np.float32)
+ tbo_map = np.zeros((h, w, 5), dtype=np.float32)
+ training_mask = np.ones((h, w,), dtype=np.float32)
+ direction_map = np.ones((h, w, 3)) * np.array([0, 0, 1]).reshape([1, 1, 3]).astype(np.float32)
+
+ for poly_idx, poly_tag in enumerate(zip(polys, tags)):
+ poly = poly_tag[0]
+ tag = poly_tag[1]
+
+ # generate min_area_quad
+ min_area_quad, center_point = self.gen_min_area_quad_from_poly(poly)
+ min_area_quad_h = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[3]) +
+ np.linalg.norm(min_area_quad[1] - min_area_quad[2]))
+ min_area_quad_w = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[1]) +
+ np.linalg.norm(min_area_quad[2] - min_area_quad[3]))
+
+ if min(min_area_quad_h, min_area_quad_w) < self.min_text_size * ds_ratio \
+ or min(min_area_quad_h, min_area_quad_w) > self.max_text_size * ds_ratio:
+ continue
+
+ if tag:
+ # continue
+ cv2.fillPoly(training_mask, poly.astype(np.int32)[np.newaxis, :, :], 0.15)
+ else:
+ tcl_poly = self.poly2tcl(poly, tcl_ratio)
+ tcl_quads = self.poly2quads(tcl_poly)
+ poly_quads = self.poly2quads(poly)
+ # stcl map
+ stcl_quads, quad_index = self.shrink_poly_along_width(tcl_quads, shrink_ratio_of_width=shrink_ratio_of_width,
+ expand_height_ratio=1.0 / tcl_ratio)
+ # generate tcl map
+ cv2.fillPoly(score_map, np.round(stcl_quads).astype(np.int32), 1.0)
+
+ # generate tbo map
+ for idx, quad in enumerate(stcl_quads):
+ quad_mask = np.zeros((h, w), dtype=np.float32)
+ quad_mask = cv2.fillPoly(quad_mask, np.round(quad[np.newaxis, :, :]).astype(np.int32), 1.0)
+ tbo_map = self.gen_quad_tbo(poly_quads[quad_index[idx]], quad_mask, tbo_map)
+ return score_map, tbo_map, training_mask
+
+ def generate_tvo_and_tco(self, hw, polys, tags, tcl_ratio=0.3, ds_ratio=0.25):
+        """
+        Generate the TVO (text vertex offset) map and the TCO (text center offset) map.
+        """
+ h, w = hw
+ h, w = int(h * ds_ratio), int(w * ds_ratio)
+ polys = polys * ds_ratio
+ poly_mask = np.zeros((h, w), dtype=np.float32)
+
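+        # tvo_map starts as an (x, y) pixel grid in channels 0-7 plus ones in channel 8;
+        # after subtracting the vertex coordinates and dividing by the short-edge length
+        # below, it holds per-pixel offsets to the 4 quad vertices plus a scale term.
+        # tco_map is built the same way for offsets to the quad center.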
+ tvo_map = np.ones((9, h, w), dtype=np.float32)
+ tvo_map[0:-1:2] = np.tile(np.arange(0, w), (h, 1))
+ tvo_map[1:-1:2] = np.tile(np.arange(0, w), (h, 1)).T
+ poly_tv_xy_map = np.zeros((8, h, w), dtype=np.float32)
+
+ # tco map
+ tco_map = np.ones((3, h, w), dtype=np.float32)
+ tco_map[0] = np.tile(np.arange(0, w), (h, 1))
+ tco_map[1] = np.tile(np.arange(0, w), (h, 1)).T
+ poly_tc_xy_map = np.zeros((2, h, w), dtype=np.float32)
+
+ poly_short_edge_map = np.ones((h, w), dtype=np.float32)
+
+ for poly, poly_tag in zip(polys, tags):
+
+ if poly_tag == True:
+ continue
+
+ # adjust point order for vertical poly
+ poly = self.adjust_point(poly)
+
+ # generate min_area_quad
+ min_area_quad, center_point = self.gen_min_area_quad_from_poly(poly)
+ min_area_quad_h = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[3]) +
+ np.linalg.norm(min_area_quad[1] - min_area_quad[2]))
+ min_area_quad_w = 0.5 * (np.linalg.norm(min_area_quad[0] - min_area_quad[1]) +
+ np.linalg.norm(min_area_quad[2] - min_area_quad[3]))
+
+ # generate tcl map and text, 128 * 128
+ tcl_poly = self.poly2tcl(poly, tcl_ratio)
+
+ # generate poly_tv_xy_map
+ for idx in range(4):
+ cv2.fillPoly(poly_tv_xy_map[2 * idx],
+ np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32),
+ float(min(max(min_area_quad[idx, 0], 0), w)))
+ cv2.fillPoly(poly_tv_xy_map[2 * idx + 1],
+ np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32),
+ float(min(max(min_area_quad[idx, 1], 0), h)))
+
+ # generate poly_tc_xy_map
+ for idx in range(2):
+ cv2.fillPoly(poly_tc_xy_map[idx],
+ np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), float(center_point[idx]))
+
+ # generate poly_short_edge_map
+ cv2.fillPoly(poly_short_edge_map,
+ np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32),
+ float(max(min(min_area_quad_h, min_area_quad_w), 1.0)))
+
+ # generate poly_mask and training_mask
+ cv2.fillPoly(poly_mask, np.round(tcl_poly[np.newaxis, :, :]).astype(np.int32), 1)
+
+ tvo_map *= poly_mask
+ tvo_map[:8] -= poly_tv_xy_map
+ tvo_map[-1] /= poly_short_edge_map
+ tvo_map = tvo_map.transpose((1, 2, 0))
+
+ tco_map *= poly_mask
+ tco_map[:2] -= poly_tc_xy_map
+ tco_map[-1] /= poly_short_edge_map
+ tco_map = tco_map.transpose((1, 2, 0))
+
+ return tvo_map, tco_map
+
+ def adjust_point(self, poly):
+        """
+        Adjust the point order (mainly for vertical polys) so the long edges run along
+        the text direction.
+        """
+ point_num = poly.shape[0]
+ if point_num == 4:
+ len_1 = np.linalg.norm(poly[0] - poly[1])
+ len_2 = np.linalg.norm(poly[1] - poly[2])
+ len_3 = np.linalg.norm(poly[2] - poly[3])
+ len_4 = np.linalg.norm(poly[3] - poly[0])
+
+ if (len_1 + len_3) * 1.5 < (len_2 + len_4):
+ poly = poly[[1, 2, 3, 0], :]
+
+ elif point_num > 4:
+ vector_1 = poly[0] - poly[1]
+ vector_2 = poly[1] - poly[2]
+ cos_theta = np.dot(vector_1, vector_2) / (np.linalg.norm(vector_1) * np.linalg.norm(vector_2) + 1e-6)
+ theta = np.arccos(np.round(cos_theta, decimals=4))
+
+ if abs(theta) > (70 / 180 * math.pi):
+ index = list(range(1, point_num)) + [0]
+ poly = poly[np.array(index), :]
+ return poly
+
+ def gen_min_area_quad_from_poly(self, poly):
+ """
+ Generate min area quad from poly.
+ """
+ point_num = poly.shape[0]
+ min_area_quad = np.zeros((4, 2), dtype=np.float32)
+ if point_num == 4:
+ min_area_quad = poly
+ center_point = np.sum(poly, axis=0) / 4
+ else:
+ rect = cv2.minAreaRect(poly.astype(np.int32)) # (center (x,y), (width, height), angle of rotation)
+ center_point = rect[0]
+ box = np.array(cv2.boxPoints(rect))
+
+ first_point_idx = 0
+ min_dist = 1e4
+ for i in range(4):
+ dist = np.linalg.norm(box[(i + 0) % 4] - poly[0]) + \
+ np.linalg.norm(box[(i + 1) % 4] - poly[point_num // 2 - 1]) + \
+ np.linalg.norm(box[(i + 2) % 4] - poly[point_num // 2]) + \
+ np.linalg.norm(box[(i + 3) % 4] - poly[-1])
+ if dist < min_dist:
+ min_dist = dist
+ first_point_idx = i
+
+ for i in range(4):
+ min_area_quad[i] = box[(first_point_idx + i) % 4]
+
+ return min_area_quad, center_point
+
+ def shrink_quad_along_width(self, quad, begin_width_ratio=0., end_width_ratio=1.):
+        """
+        Return the sub-quad between begin_width_ratio and end_width_ratio along the quad's width.
+        """
+ ratio_pair = np.array([[begin_width_ratio], [end_width_ratio]], dtype=np.float32)
+ p0_1 = quad[0] + (quad[1] - quad[0]) * ratio_pair
+ p3_2 = quad[3] + (quad[2] - quad[3]) * ratio_pair
+ return np.array([p0_1[0], p0_1[1], p3_2[1], p3_2[0]])
+
+ def shrink_poly_along_width(self, quads, shrink_ratio_of_width, expand_height_ratio=1.0):
+        """
+        Shrink a poly (given as a sequence of quads) along its width by shrink_ratio_of_width,
+        returning the shrunk quads and the indices of the quads they came from.
+        """
+ upper_edge_list = []
+
+ def get_cut_info(edge_len_list, cut_len):
+ for idx, edge_len in enumerate(edge_len_list):
+ cut_len -= edge_len
+ if cut_len <= 0.000001:
+ ratio = (cut_len + edge_len_list[idx]) / edge_len_list[idx]
+ return idx, ratio
+
+ for quad in quads:
+ upper_edge_len = np.linalg.norm(quad[0] - quad[1])
+ upper_edge_list.append(upper_edge_len)
+
+ # length of left edge and right edge.
+ left_length = np.linalg.norm(quads[0][0] - quads[0][3]) * expand_height_ratio
+ right_length = np.linalg.norm(quads[-1][1] - quads[-1][2]) * expand_height_ratio
+
+ shrink_length = min(left_length, right_length, sum(upper_edge_list)) * shrink_ratio_of_width
+ # shrinking length
+ upper_len_left = shrink_length
+ upper_len_right = sum(upper_edge_list) - shrink_length
+
+ left_idx, left_ratio = get_cut_info(upper_edge_list, upper_len_left)
+ left_quad = self.shrink_quad_along_width(quads[left_idx], begin_width_ratio=left_ratio, end_width_ratio=1)
+ right_idx, right_ratio = get_cut_info(upper_edge_list, upper_len_right)
+ right_quad = self.shrink_quad_along_width(quads[right_idx], begin_width_ratio=0, end_width_ratio=right_ratio)
+
+ out_quad_list = []
+ if left_idx == right_idx:
+ out_quad_list.append([left_quad[0], right_quad[1], right_quad[2], left_quad[3]])
+ else:
+ out_quad_list.append(left_quad)
+ for idx in range(left_idx + 1, right_idx):
+ out_quad_list.append(quads[idx])
+ out_quad_list.append(right_quad)
+
+ return np.array(out_quad_list), list(range(left_idx, right_idx + 1))
+
+ def vector_angle(self, A, B):
+ """
+ Calculate the angle between vector AB and x-axis positive direction.
+ """
+ AB = np.array([B[1] - A[1], B[0] - A[0]])
+ return np.arctan2(*AB)
+
+ def theta_line_cross_point(self, theta, point):
+ """
+ Calculate the line through given point and angle in ax + by + c =0 form.
+ """
+ x, y = point
+ cos = np.cos(theta)
+ sin = np.sin(theta)
+ return [sin, -cos, cos * y - sin * x]
+
+ def line_cross_two_point(self, A, B):
+ """
+ Calculate the line through given point A and B in ax + by + c =0 form.
+ """
+ angle = self.vector_angle(A, B)
+ return self.theta_line_cross_point(angle, A)
+
+ def average_angle(self, poly):
+ """
+ Calculate the average angle between left and right edge in given poly.
+ """
+ p0, p1, p2, p3 = poly
+ angle30 = self.vector_angle(p3, p0)
+ angle21 = self.vector_angle(p2, p1)
+ return (angle30 + angle21) / 2
+
+ def line_cross_point(self, line1, line2):
+        """
+        Compute the intersection point of line1 and line2, both given in ax + by + c = 0 form.
+        """
+ a1, b1, c1 = line1
+ a2, b2, c2 = line2
+ d = a1 * b2 - a2 * b1
+
+ if d == 0:
+ #print("line1", line1)
+ #print("line2", line2)
+ print('Cross point does not exist')
+ return np.array([0, 0], dtype=np.float32)
+ else:
+ x = (b1 * c2 - b2 * c1) / d
+ y = (a2 * c1 - a1 * c2) / d
+
+ return np.array([x, y], dtype=np.float32)
+
+ def quad2tcl(self, poly, ratio):
+        """
+        Generate the text center line region from a clockwise quad of shape (4, 2).
+        """
+ ratio_pair = np.array([[0.5 - ratio / 2], [0.5 + ratio / 2]], dtype=np.float32)
+ p0_3 = poly[0] + (poly[3] - poly[0]) * ratio_pair
+ p1_2 = poly[1] + (poly[2] - poly[1]) * ratio_pair
+ return np.array([p0_3[0], p1_2[0], p1_2[1], p0_3[1]])
+
+ def poly2tcl(self, poly, ratio):
+        """
+        Generate the text center line region from clockwise poly points.
+        """
+ ratio_pair = np.array([[0.5 - ratio / 2], [0.5 + ratio / 2]], dtype=np.float32)
+ tcl_poly = np.zeros_like(poly)
+ point_num = poly.shape[0]
+
+ for idx in range(point_num // 2):
+ point_pair = poly[idx] + (poly[point_num - 1 - idx] - poly[idx]) * ratio_pair
+ tcl_poly[idx] = point_pair[0]
+ tcl_poly[point_num - 1 - idx] = point_pair[1]
+ return tcl_poly
+
+ def gen_quad_tbo(self, quad, tcl_mask, tbo_map):
+        """
+        Generate the tbo_map for a given quad: channels 0-3 hold the (y, x) offsets to
+        the upper and lower text borders, channel 4 holds 2 / short_side_length.
+        """
+ # upper and lower line function: ax + by + c = 0;
+ up_line = self.line_cross_two_point(quad[0], quad[1])
+ lower_line = self.line_cross_two_point(quad[3], quad[2])
+
+ quad_h = 0.5 * (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2]))
+ quad_w = 0.5 * (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3]))
+
+ # average angle of left and right line.
+ angle = self.average_angle(quad)
+
+ xy_in_poly = np.argwhere(tcl_mask == 1)
+ for y, x in xy_in_poly:
+ point = (x, y)
+ line = self.theta_line_cross_point(angle, point)
+ cross_point_upper = self.line_cross_point(up_line, line)
+ cross_point_lower = self.line_cross_point(lower_line, line)
+            # offsets are stored in (y, x) order (note the reversal)
+ upper_offset_x, upper_offset_y = cross_point_upper - point
+ lower_offset_x, lower_offset_y = cross_point_lower - point
+ tbo_map[y, x, 0] = upper_offset_y
+ tbo_map[y, x, 1] = upper_offset_x
+ tbo_map[y, x, 2] = lower_offset_y
+ tbo_map[y, x, 3] = lower_offset_x
+ tbo_map[y, x, 4] = 1.0 / max(min(quad_h, quad_w), 1.0) * 2
+ return tbo_map
+
+ def poly2quads(self, poly):
+ """
+ Split poly into quads.
+ """
+ quad_list = []
+ point_num = poly.shape[0]
+
+ # point pair
+ point_pair_list = []
+ for idx in range(point_num // 2):
+ point_pair = [poly[idx], poly[point_num - 1 - idx]]
+ point_pair_list.append(point_pair)
+
+ quad_num = point_num // 2 - 1
+ for idx in range(quad_num):
+ # reshape and adjust to clock-wise
+ quad_list.append((np.array(point_pair_list)[[idx, idx + 1]]).reshape(4, 2)[[0, 2, 3, 1]])
+
+ return np.array(quad_list)
+
+ def extract_polys(self, poly_txt_path):
+ """
+        Read text_polys, txt_tags and txts from a given txt file.
+ """
+ text_polys, txt_tags, txts = [], [], []
+
+ with open(poly_txt_path) as f:
+ for line in f.readlines():
+ poly_str, txt = line.strip().split('\t')
+                poly = list(map(float, poly_str.split(',')))
+ text_polys.append(np.array(poly, dtype=np.float32).reshape(-1, 2))
+ txts.append(txt)
+ if txt == '###':
+ txt_tags.append(True)
+ else:
+ txt_tags.append(False)
+
+        return np.array([np.array(p) for p in text_polys]), \
+ np.array(txt_tags, dtype=np.bool), txts
+
+ def __call__(self, label_infor):
+ infor = self.convert_label_infor(label_infor)
+ im_path, text_polys, text_tags, text_strs = infor
+ im = cv2.imread(im_path)
+ if im is None:
+ return None
+ if text_polys.shape[0] == 0:
+ return None
+
+ h, w, _ = im.shape
+ text_polys, text_tags, hv_tags = self.check_and_validate_polys(text_polys, text_tags, (h, w))
+
+ if text_polys.shape[0] == 0:
+ return None
+
+        # set the aspect ratio while keeping the area fixed
+ asp_scales = np.arange(1.0, 1.55, 0.1)
+ asp_scale = np.random.choice(asp_scales)
+
+ if np.random.rand() < 0.5:
+ asp_scale = 1.0 / asp_scale
+ asp_scale = math.sqrt(asp_scale)
+
+ asp_wx = asp_scale
+ asp_hy = 1.0 / asp_scale
+ im = cv2.resize(im, dsize=None, fx=asp_wx, fy=asp_hy)
+ text_polys[:, :, 0] *= asp_wx
+ text_polys[:, :, 1] *= asp_hy
+
+ h, w, _ = im.shape
+ if max(h, w) > 2048:
+ rd_scale = 2048.0 / max(h, w)
+ im = cv2.resize(im, dsize=None, fx=rd_scale, fy=rd_scale)
+ text_polys *= rd_scale
+ h, w, _ = im.shape
+ if min(h, w) < 16:
+ return None
+
+ #no background
+ im, text_polys, text_tags, hv_tags, text_strs = self.crop_area(im, \
+ text_polys, text_tags, hv_tags, text_strs, crop_background=False)
+ if text_polys.shape[0] == 0:
+ return None
+        # skip samples where all boxes are ignored
+ if np.sum((text_tags * 1.0)) >= text_tags.size:
+ return None
+ new_h, new_w, _ = im.shape
+ if (new_h is None) or (new_w is None):
+ return None
+ #resize image
+ std_ratio = float(self.input_size) / max(new_w, new_h)
+ rand_scales = np.array([0.25, 0.375, 0.5, 0.625, 0.75, 0.875, 1.0, 1.0, 1.0, 1.0, 1.0])
+ rz_scale = std_ratio * np.random.choice(rand_scales)
+ im = cv2.resize(im, dsize=None, fx=rz_scale, fy=rz_scale)
+ text_polys[:, :, 0] *= rz_scale
+ text_polys[:, :, 1] *= rz_scale
+
+ #add gaussian blur
+ if np.random.rand() < 0.1 * 0.5:
+ ks = np.random.permutation(5)[0] + 1
+ ks = int(ks/2)*2 + 1
+ im = cv2.GaussianBlur(im, ksize=(ks, ks), sigmaX=0, sigmaY=0)
+        # brighten
+ if np.random.rand() < 0.1 * 0.5:
+ im = im * (1.0 + np.random.rand() * 0.5)
+ im = np.clip(im, 0.0, 255.0)
+        # darken
+ if np.random.rand() < 0.1 * 0.5:
+ im = im * (1.0 - np.random.rand() * 0.5)
+ im = np.clip(im, 0.0, 255.0)
+
+        # Pad the image to [input_size, input_size]
+ new_h, new_w, _ = im.shape
+ if min(new_w, new_h) < self.input_size * 0.5:
+ return None
+
+ im_padded = np.ones((self.input_size, self.input_size, 3), dtype=np.float32)
+ im_padded[:, :, 2] = 0.485 * 255
+ im_padded[:, :, 1] = 0.456 * 255
+ im_padded[:, :, 0] = 0.406 * 255
+
+        # Randomize the start position
+ del_h = self.input_size - new_h
+ del_w = self.input_size - new_w
+ sh, sw = 0, 0
+ if del_h > 1:
+ sh = int(np.random.rand() * del_h)
+ if del_w > 1:
+ sw = int(np.random.rand() * del_w)
+
+ # Padding
+ im_padded[sh: sh + new_h, sw: sw + new_w, :] = im.copy()
+ text_polys[:, :, 0] += sw
+ text_polys[:, :, 1] += sh
+
+ score_map, border_map, training_mask = self.generate_tcl_label((self.input_size, self.input_size),
+ text_polys, text_tags, 0.25)
+
+ # SAST head
+ tvo_map, tco_map = self.generate_tvo_and_tco((self.input_size, self.input_size), text_polys, text_tags, tcl_ratio=0.3, ds_ratio=0.25)
+ # print("test--------tvo_map shape:", tvo_map.shape)
+
+ im_padded[:, :, 2] -= 0.485 * 255
+ im_padded[:, :, 1] -= 0.456 * 255
+ im_padded[:, :, 0] -= 0.406 * 255
+ im_padded[:, :, 2] /= (255.0 * 0.229)
+ im_padded[:, :, 1] /= (255.0 * 0.224)
+ im_padded[:, :, 0] /= (255.0 * 0.225)
+ im_padded = im_padded.transpose((2, 0, 1))
+
+ return im_padded[::-1, :, :], score_map[np.newaxis, :, :], border_map.transpose((2, 0, 1)), training_mask[np.newaxis, :, :], tvo_map.transpose((2, 0, 1)), tco_map.transpose((2, 0, 1))
+
+
+class SASTProcessTest(object):
+ """
+ SAST process function for test
+ """
+ def __init__(self, params):
+ super(SASTProcessTest, self).__init__()
+ if 'max_side_len' in params:
+ self.max_side_len = params['max_side_len']
+ else:
+ self.max_side_len = 2400
+
+ def resize_image(self, im):
+        """
+        Resize the image so that both sides are multiples of max_stride, as required by the network.
+        The longer side is first scaled to self.max_side_len to avoid GPU out-of-memory.
+        :param im: the input image
+        :return: the resized image and the (ratio_h, ratio_w) resize ratios
+        """
+ h, w, _ = im.shape
+
+ resize_w = w
+ resize_h = h
+
+ # Fix the longer side
+ if resize_h > resize_w:
+ ratio = float(self.max_side_len) / resize_h
+ else:
+ ratio = float(self.max_side_len) / resize_w
+
+ resize_h = int(resize_h * ratio)
+ resize_w = int(resize_w * ratio)
+
+ max_stride = 128
+ resize_h = (resize_h + max_stride - 1) // max_stride * max_stride
+ resize_w = (resize_w + max_stride - 1) // max_stride * max_stride
+ im = cv2.resize(im, (int(resize_w), int(resize_h)))
+ ratio_h = resize_h / float(h)
+ ratio_w = resize_w / float(w)
+
+ return im, (ratio_h, ratio_w)
+
+ def __call__(self, im):
+ src_h, src_w, _ = im.shape
+ im, (ratio_h, ratio_w) = self.resize_image(im)
+ img_mean = [0.485, 0.456, 0.406]
+ img_std = [0.229, 0.224, 0.225]
+ im = im[:, :, ::-1].astype(np.float32)
+ im = im / 255
+ im -= img_mean
+ im /= img_std
+ im = im.transpose((2, 0, 1))
+ im = im[np.newaxis, :]
+ return [im, (ratio_h, ratio_w, src_h, src_w)]
diff --git a/ppocr/data/rec/dataset_traversal.py b/ppocr/data/rec/dataset_traversal.py
index 1e9dbb9c1db64ea4e572e321de3d6d624b29a3a9..ebee624ab74b2390323ab538627f459cb2353e8b 100755
--- a/ppocr/data/rec/dataset_traversal.py
+++ b/ppocr/data/rec/dataset_traversal.py
@@ -26,7 +26,7 @@ from ppocr.utils.utility import initial_logger
from ppocr.utils.utility import get_image_file_list
logger = initial_logger()
-from .img_tools import process_image, get_img_data
+from .img_tools import process_image, process_image_srn, get_img_data
class LMDBReader(object):
@@ -43,6 +43,9 @@ class LMDBReader(object):
self.mode = params['mode']
self.drop_last = False
self.use_tps = False
+ self.num_heads = None
+ if "num_heads" in params:
+ self.num_heads = params['num_heads']
if "tps" in params:
self.ues_tps = True
self.use_distort = False
@@ -119,12 +122,19 @@ class LMDBReader(object):
img = cv2.imread(single_img)
if img.shape[-1] == 1 or len(list(img.shape)) == 2:
img = cv2.cvtColor(img, cv2.COLOR_GRAY2BGR)
- norm_img = process_image(
- img=img,
- image_shape=self.image_shape,
- char_ops=self.char_ops,
- tps=self.use_tps,
- infer_mode=True)
+ if self.loss_type == 'srn':
+ norm_img = process_image_srn(
+ img=img,
+ image_shape=self.image_shape,
+ num_heads=self.num_heads,
+ max_text_length=self.max_text_length)
+ else:
+ norm_img = process_image(
+ img=img,
+ image_shape=self.image_shape,
+ char_ops=self.char_ops,
+ tps=self.use_tps,
+ infer_mode=True)
yield norm_img
else:
lmdb_sets = self.load_hierarchical_lmdb_dataset()
@@ -144,14 +154,25 @@ class LMDBReader(object):
if sample_info is None:
continue
img, label = sample_info
- outs = process_image(
- img=img,
- image_shape=self.image_shape,
- label=label,
- char_ops=self.char_ops,
- loss_type=self.loss_type,
- max_text_length=self.max_text_length,
- distort=self.use_distort)
+ outs = []
+ if self.loss_type == "srn":
+ outs = process_image_srn(
+ img=img,
+ image_shape=self.image_shape,
+ num_heads=self.num_heads,
+ max_text_length=self.max_text_length,
+ label=label,
+ char_ops=self.char_ops,
+ loss_type=self.loss_type)
+
+ else:
+ outs = process_image(
+ img=img,
+ image_shape=self.image_shape,
+ label=label,
+ char_ops=self.char_ops,
+ loss_type=self.loss_type,
+ max_text_length=self.max_text_length)
if outs is None:
continue
yield outs
@@ -185,6 +206,7 @@ class SimpleReader(object):
if params['mode'] != 'test':
self.img_set_dir = params['img_set_dir']
self.label_file_path = params['label_file_path']
+ self.use_gpu = params['use_gpu']
self.char_ops = params['char_ops']
self.image_shape = params['image_shape']
self.loss_type = params['loss_type']
@@ -213,6 +235,15 @@ class SimpleReader(object):
if self.mode != 'train':
process_id = 0
+ def get_device_num():
+ if self.use_gpu:
+ gpus = os.environ.get("CUDA_VISIBLE_DEVICES", '1')
+ gpu_num = len(gpus.split(','))
+ return gpu_num
+ else:
+ cpu_num = os.environ.get("CPU_NUM", 1)
+ return int(cpu_num)
+
def sample_iter_reader():
if self.mode != 'train' and self.infer_img is not None:
image_file_list = get_image_file_list(self.infer_img)
@@ -237,6 +268,12 @@ class SimpleReader(object):
print("multiprocess is not fully compatible with Windows."
"num_workers will be 1.")
self.num_workers = 1
+ if self.batch_size * get_device_num(
+ ) * self.num_workers > img_num:
+ raise Exception(
+ "The number of the whole data ({}) is smaller than the batch_size * devices_num * num_workers ({})".
+ format(img_num, self.batch_size * get_device_num() *
+ self.num_workers))
for img_id in range(process_id, img_num, self.num_workers):
label_infor = label_infor_list[img_id_list[img_id]]
substr = label_infor.decode('utf-8').strip("\n").split("\t")
diff --git a/ppocr/data/rec/img_tools.py b/ppocr/data/rec/img_tools.py
index d41abd9ba5c867853984b4b3b4c9eda41ebeff7c..527e0266ee33ac81e29b5610ed05f401860078a4 100755
--- a/ppocr/data/rec/img_tools.py
+++ b/ppocr/data/rec/img_tools.py
@@ -360,7 +360,7 @@ def process_image(img,
text = char_ops.encode(label)
if len(text) == 0 or len(text) > max_text_length:
logger.info(
- "Warning in ppocr/data/rec/img_tools.py:line362: Wrong data type."
+ "Warning in ppocr/data/rec/img_tools.py: Wrong data type."
"Excepted string with length between 1 and {}, but "
"got '{}'. Label is '{}'".format(max_text_length,
len(text), label))
@@ -381,3 +381,84 @@ def process_image(img,
assert False, "Unsupport loss_type %s in process_image"\
% loss_type
return (norm_img)
+
+def resize_norm_img_srn(img, image_shape):
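+    """
+    Resize a recognition image for SRN: keep the height at imgH, set the width to
+    1x / 2x / 3x imgH depending on the aspect ratio (imgW otherwise), convert to
+    grayscale and zero-pad on the right to shape (1, imgH, imgW).
+    """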
+ imgC, imgH, imgW = image_shape
+
+ img_black = np.zeros((imgH, imgW))
+ im_hei = img.shape[0]
+ im_wid = img.shape[1]
+
+ if im_wid <= im_hei * 1:
+ img_new = cv2.resize(img, (imgH * 1, imgH))
+ elif im_wid <= im_hei * 2:
+ img_new = cv2.resize(img, (imgH * 2, imgH))
+ elif im_wid <= im_hei * 3:
+ img_new = cv2.resize(img, (imgH * 3, imgH))
+ else:
+ img_new = cv2.resize(img, (imgW, imgH))
+
+ img_np = np.asarray(img_new)
+ img_np = cv2.cvtColor(img_np, cv2.COLOR_BGR2GRAY)
+ img_black[:, 0:img_np.shape[1]] = img_np
+ img_black = img_black[:, :, np.newaxis]
+
+ row, col, c = img_black.shape
+ c = 1
+
+ return np.reshape(img_black, (c, row, col)).astype(np.float32)
+
+def srn_other_inputs(image_shape,
+ num_heads,
+ max_text_length):
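+    """
+    Build the auxiliary SRN inputs: position indices for the visual encoder and the
+    GSRM, a label-weight placeholder, and two triangular self-attention bias masks
+    (upper and lower) scaled by -1e9.
+    """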
+
+ imgC, imgH, imgW = image_shape
+ feature_dim = int((imgH / 8) * (imgW / 8))
+
+ encoder_word_pos = np.array(range(0, feature_dim)).reshape((feature_dim, 1)).astype('int64')
+ gsrm_word_pos = np.array(range(0, max_text_length)).reshape((max_text_length, 1)).astype('int64')
+
+ lbl_weight = np.array([37] * max_text_length).reshape((-1,1)).astype('int64')
+
+ gsrm_attn_bias_data = np.ones((1, max_text_length, max_text_length))
+ gsrm_slf_attn_bias1 = np.triu(gsrm_attn_bias_data, 1).reshape([-1, 1, max_text_length, max_text_length])
+ gsrm_slf_attn_bias1 = np.tile(gsrm_slf_attn_bias1, [1, num_heads, 1, 1]) * [-1e9]
+
+ gsrm_slf_attn_bias2 = np.tril(gsrm_attn_bias_data, -1).reshape([-1, 1, max_text_length, max_text_length])
+ gsrm_slf_attn_bias2 = np.tile(gsrm_slf_attn_bias2, [1, num_heads, 1, 1]) * [-1e9]
+
+ encoder_word_pos = encoder_word_pos[np.newaxis, :]
+ gsrm_word_pos = gsrm_word_pos[np.newaxis, :]
+
+ return [lbl_weight, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2]
+
+def process_image_srn(img,
+ image_shape,
+ num_heads,
+ max_text_length,
+ label=None,
+ char_ops=None,
+ loss_type=None):
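+    """
+    Preprocess one sample for SRN: resize and normalize the image, build the
+    auxiliary SRN inputs, and, when a label is given, encode and pad it to
+    max_text_length.
+    """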
+ norm_img = resize_norm_img_srn(img, image_shape)
+ norm_img = norm_img[np.newaxis, :]
+ [lbl_weight, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2] = \
+ srn_other_inputs(image_shape, num_heads, max_text_length)
+
+ if label is not None:
+ char_num = char_ops.get_char_num()
+ text = char_ops.encode(label)
+ if len(text) == 0 or len(text) > max_text_length:
+ return None
+ else:
+ if loss_type == "srn":
+ text_padded = [37] * max_text_length
+ for i in range(len(text)):
+ text_padded[i] = text[i]
+ lbl_weight[i] = [1.0]
+ text_padded = np.array(text_padded)
+ text = text_padded.reshape(-1, 1)
+                return (norm_img, text, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2, lbl_weight)
+ else:
+                assert False, "Unsupported loss_type %s in process_image_srn"\
+ % loss_type
+ return (norm_img, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1, gsrm_slf_attn_bias2)
diff --git a/ppocr/modeling/architectures/det_model.py b/ppocr/modeling/architectures/det_model.py
old mode 100755
new mode 100644
index 7ea0949bc80c55724182704616a93b879be92314..54d3a479f40a3f9f6ebb9e6ab739ae7a44796a2e
--- a/ppocr/modeling/architectures/det_model.py
+++ b/ppocr/modeling/architectures/det_model.py
@@ -97,6 +97,23 @@ class DetModel(object):
'shrink_mask':shrink_mask,\
'threshold_map':threshold_map,\
'threshold_mask':threshold_mask}
+ elif self.algorithm == "SAST":
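+            # SAST ground-truth maps are produced at 1/4 of the input resolution
+            # (128 x 128 for a 512 x 512 input), matching ds_ratio=0.25 in SASTProcessTrain.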
+ input_score = fluid.layers.data(
+ name='score', shape=[1, 128, 128], dtype='float32')
+ input_border = fluid.layers.data(
+ name='border', shape=[5, 128, 128], dtype='float32')
+ input_mask = fluid.layers.data(
+ name='mask', shape=[1, 128, 128], dtype='float32')
+ input_tvo = fluid.layers.data(
+ name='tvo', shape=[9, 128, 128], dtype='float32')
+ input_tco = fluid.layers.data(
+ name='tco', shape=[3, 128, 128], dtype='float32')
+ feed_list = [image, input_score, input_border, input_mask, input_tvo, input_tco]
+ labels = {'input_score': input_score,\
+ 'input_border': input_border,\
+ 'input_mask': input_mask,\
+ 'input_tvo': input_tvo,\
+ 'input_tco': input_tco}
loader = fluid.io.DataLoader.from_generator(
feed_list=feed_list,
capacity=64,
diff --git a/ppocr/modeling/architectures/rec_model.py b/ppocr/modeling/architectures/rec_model.py
index e80a50ab6504c80aa7f10759576208486caf7c3f..fe2d4c16dce3882980fe2238ecc16c7c08a89792 100755
--- a/ppocr/modeling/architectures/rec_model.py
+++ b/ppocr/modeling/architectures/rec_model.py
@@ -58,6 +58,10 @@ class RecModel(object):
self.loss_type = global_params['loss_type']
self.image_shape = global_params['image_shape']
self.max_text_length = global_params['max_text_length']
+ if "num_heads" in global_params:
+ self.num_heads = global_params["num_heads"]
+ else:
+ self.num_heads = None
def create_feed(self, mode):
image_shape = deepcopy(self.image_shape)
@@ -77,6 +81,48 @@ class RecModel(object):
lod_level=1)
feed_list = [image, label_in, label_out]
labels = {'label_in': label_in, 'label_out': label_out}
+ elif self.loss_type == "srn":
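+            # SRN needs extra feeds besides image and label: position ids for the
+            # visual encoder and the GSRM, two self-attention bias masks, and a
+            # per-character label weight.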
+ encoder_word_pos = fluid.data(
+ name="encoder_word_pos",
+ shape=[
+ -1, int((image_shape[-2] / 8) * (image_shape[-1] / 8)),
+ 1
+ ],
+ dtype="int64")
+ gsrm_word_pos = fluid.data(
+ name="gsrm_word_pos",
+ shape=[-1, self.max_text_length, 1],
+ dtype="int64")
+ gsrm_slf_attn_bias1 = fluid.data(
+ name="gsrm_slf_attn_bias1",
+ shape=[
+ -1, self.num_heads, self.max_text_length,
+ self.max_text_length
+ ],
+ dtype="float32")
+ gsrm_slf_attn_bias2 = fluid.data(
+ name="gsrm_slf_attn_bias2",
+ shape=[
+ -1, self.num_heads, self.max_text_length,
+ self.max_text_length
+ ],
+ dtype="float32")
+ lbl_weight = fluid.layers.data(
+ name="lbl_weight", shape=[-1, 1], dtype='int64')
+ label = fluid.data(
+ name='label', shape=[-1, 1], dtype='int32', lod_level=1)
+ feed_list = [
+ image, label, encoder_word_pos, gsrm_word_pos,
+ gsrm_slf_attn_bias1, gsrm_slf_attn_bias2, lbl_weight
+ ]
+ labels = {
+ 'label': label,
+ 'encoder_word_pos': encoder_word_pos,
+ 'gsrm_word_pos': gsrm_word_pos,
+ 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1,
+ 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2,
+ 'lbl_weight': lbl_weight
+ }
else:
label = fluid.data(
name='label', shape=[None, 1], dtype='int32', lod_level=1)
@@ -88,6 +134,8 @@ class RecModel(object):
use_double_buffer=True,
iterable=False)
else:
+ labels = None
+ loader = None
if self.char_type == "ch" and self.infer_img:
image_shape[-1] = -1
if self.tps != None:
@@ -98,8 +146,42 @@ class RecModel(object):
)
image_shape = deepcopy(self.image_shape)
image = fluid.data(name='image', shape=image_shape, dtype='float32')
- labels = None
- loader = None
+ if self.loss_type == "srn":
+ encoder_word_pos = fluid.data(
+ name="encoder_word_pos",
+ shape=[
+ -1, int((image_shape[-2] / 8) * (image_shape[-1] / 8)),
+ 1
+ ],
+ dtype="int64")
+ gsrm_word_pos = fluid.data(
+ name="gsrm_word_pos",
+ shape=[-1, self.max_text_length, 1],
+ dtype="int64")
+ gsrm_slf_attn_bias1 = fluid.data(
+ name="gsrm_slf_attn_bias1",
+ shape=[
+ -1, self.num_heads, self.max_text_length,
+ self.max_text_length
+ ],
+ dtype="float32")
+ gsrm_slf_attn_bias2 = fluid.data(
+ name="gsrm_slf_attn_bias2",
+ shape=[
+ -1, self.num_heads, self.max_text_length,
+ self.max_text_length
+ ],
+ dtype="float32")
+ feed_list = [
+ image, encoder_word_pos, gsrm_word_pos, gsrm_slf_attn_bias1,
+ gsrm_slf_attn_bias2
+ ]
+ labels = {
+ 'encoder_word_pos': encoder_word_pos,
+ 'gsrm_word_pos': gsrm_word_pos,
+ 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1,
+ 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2
+ }
return image, labels, loader
def __call__(self, mode):
@@ -117,13 +199,27 @@ class RecModel(object):
label = labels['label_out']
else:
label = labels['label']
- outputs = {'total_loss':loss, 'decoded_out':\
- decoded_out, 'label':label}
+ if self.loss_type == 'srn':
+ total_loss, img_loss, word_loss = self.loss(predicts, labels)
+ outputs = {
+ 'total_loss': total_loss,
+ 'img_loss': img_loss,
+ 'word_loss': word_loss,
+ 'decoded_out': decoded_out,
+ 'label': label
+ }
+ else:
+ outputs = {'total_loss':loss, 'decoded_out':\
+ decoded_out, 'label':label}
return loader, outputs
+
elif mode == "export":
predict = predicts['predict']
if self.loss_type == "ctc":
predict = fluid.layers.softmax(predict)
+ if self.loss_type == "srn":
+ raise Exception(
+                    "Warning! SRN does not support model export currently")
return [image, {'decoded_out': decoded_out, 'predicts': predict}]
else:
predict = predicts['predict']
diff --git a/ppocr/modeling/backbones/det_resnet_vd_sast.py b/ppocr/modeling/backbones/det_resnet_vd_sast.py
new file mode 100644
index 0000000000000000000000000000000000000000..14fe713852d009bef8dd7f3739c2e8070789e359
--- /dev/null
+++ b/ppocr/modeling/backbones/det_resnet_vd_sast.py
@@ -0,0 +1,274 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserve.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle.fluid as fluid
+from paddle.fluid.param_attr import ParamAttr
+
+__all__ = ["ResNet"]
+
+
+class ResNet(object):
+ def __init__(self, params):
+        """
+        The ResNet (vd variant) backbone network for the detection module.
+        Args:
+            params(dict): hyperparameters for building the network
+        """
+ self.layers = params['layers']
+ supported_layers = [18, 34, 50, 101, 152]
+ assert self.layers in supported_layers, \
+ "supported layers are {} but input layer is {}".format(supported_layers, self.layers)
+ self.is_3x3 = True
+
+ def __call__(self, input):
+ layers = self.layers
+ is_3x3 = self.is_3x3
+ # if layers == 18:
+ # depth = [2, 2, 2, 2]
+ # elif layers == 34 or layers == 50:
+ # depth = [3, 4, 6, 3]
+ # elif layers == 101:
+ # depth = [3, 4, 23, 3]
+ # elif layers == 152:
+ # depth = [3, 8, 36, 3]
+ # elif layers == 200:
+ # depth = [3, 12, 48, 3]
+ # num_filters = [64, 128, 256, 512]
+ # outs = []
+
+ if layers == 18:
+ depth = [2, 2, 2, 2]#, 3, 3]
+ elif layers == 34 or layers == 50:
+ #depth = [3, 4, 6, 3]#, 3, 3]
+ depth = [3, 4, 6, 3, 3]#, 3]
+ elif layers == 101:
+ depth = [3, 4, 23, 3]#, 3, 3]
+ elif layers == 152:
+ depth = [3, 8, 36, 3]#, 3, 3]
+ num_filters = [64, 128, 256, 512, 512]#, 512]
+ blocks = {}
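+        # Feature maps are collected in a dict keyed 'block_0', 'block_1', ... so
+        # downstream heads can look up specific stages by name.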
+
+ idx = 'block_0'
+ blocks[idx] = input
+
+ if is_3x3 == False:
+ conv = self.conv_bn_layer(
+ input=input,
+ num_filters=64,
+ filter_size=7,
+ stride=2,
+ act='relu')
+ else:
+ conv = self.conv_bn_layer(
+ input=input,
+ num_filters=32,
+ filter_size=3,
+ stride=2,
+ act='relu',
+ name='conv1_1')
+ conv = self.conv_bn_layer(
+ input=conv,
+ num_filters=32,
+ filter_size=3,
+ stride=1,
+ act='relu',
+ name='conv1_2')
+ conv = self.conv_bn_layer(
+ input=conv,
+ num_filters=64,
+ filter_size=3,
+ stride=1,
+ act='relu',
+ name='conv1_3')
+ idx = 'block_1'
+ blocks[idx] = conv
+
+ conv = fluid.layers.pool2d(
+ input=conv,
+ pool_size=3,
+ pool_stride=2,
+ pool_padding=1,
+ pool_type='max')
+
+ if layers >= 50:
+ for block in range(len(depth)):
+ for i in range(depth[block]):
+ if layers in [101, 152, 200] and block == 2:
+ if i == 0:
+ conv_name = "res" + str(block + 2) + "a"
+ else:
+ conv_name = "res" + str(block + 2) + "b" + str(i)
+ else:
+ conv_name = "res" + str(block + 2) + chr(97 + i)
+ conv = self.bottleneck_block(
+ input=conv,
+ num_filters=num_filters[block],
+ stride=2 if i == 0 and block != 0 else 1,
+ if_first=block == i == 0,
+ name=conv_name)
+ # outs.append(conv)
+ idx = 'block_' + str(block + 2)
+ blocks[idx] = conv
+ else:
+ for block in range(len(depth)):
+ for i in range(depth[block]):
+ conv_name = "res" + str(block + 2) + chr(97 + i)
+ conv = self.basic_block(
+ input=conv,
+ num_filters=num_filters[block],
+ stride=2 if i == 0 and block != 0 else 1,
+ if_first=block == i == 0,
+ name=conv_name)
+ # outs.append(conv)
+ idx = 'block_' + str(block + 2)
+ blocks[idx] = conv
+ # return outs
+ return blocks
+
+ def conv_bn_layer(self,
+ input,
+ num_filters,
+ filter_size,
+ stride=1,
+ groups=1,
+ act=None,
+ name=None):
+ conv = fluid.layers.conv2d(
+ input=input,
+ num_filters=num_filters,
+ filter_size=filter_size,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ act=None,
+ param_attr=ParamAttr(name=name + "_weights"),
+ bias_attr=False)
+ if name == "conv1":
+ bn_name = "bn_" + name
+ else:
+ bn_name = "bn" + name[3:]
+ return fluid.layers.batch_norm(
+ input=conv,
+ act=act,
+ param_attr=ParamAttr(name=bn_name + '_scale'),
+ bias_attr=ParamAttr(bn_name + '_offset'),
+ moving_mean_name=bn_name + '_mean',
+ moving_variance_name=bn_name + '_variance')
+
+ def conv_bn_layer_new(self,
+ input,
+ num_filters,
+ filter_size,
+ stride=1,
+ groups=1,
+ act=None,
+ name=None):
+ pool = fluid.layers.pool2d(
+ input=input,
+ pool_size=2,
+ pool_stride=2,
+ pool_padding=0,
+ pool_type='avg',
+ ceil_mode=True)
+
+ conv = fluid.layers.conv2d(
+ input=pool,
+ num_filters=num_filters,
+ filter_size=filter_size,
+ stride=1,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ act=None,
+ param_attr=ParamAttr(name=name + "_weights"),
+ bias_attr=False)
+ if name == "conv1":
+ bn_name = "bn_" + name
+ else:
+ bn_name = "bn" + name[3:]
+ return fluid.layers.batch_norm(
+ input=conv,
+ act=act,
+ param_attr=ParamAttr(name=bn_name + '_scale'),
+ bias_attr=ParamAttr(bn_name + '_offset'),
+ moving_mean_name=bn_name + '_mean',
+ moving_variance_name=bn_name + '_variance')
+
+ def shortcut(self, input, ch_out, stride, name, if_first=False):
+ ch_in = input.shape[1]
+ if ch_in != ch_out or stride != 1:
+ if if_first:
+ return self.conv_bn_layer(input, ch_out, 1, stride, name=name)
+ else:
+ return self.conv_bn_layer_new(
+ input, ch_out, 1, stride, name=name)
+ elif if_first:
+ return self.conv_bn_layer(input, ch_out, 1, stride, name=name)
+ else:
+ return input
+
+ def bottleneck_block(self, input, num_filters, stride, name, if_first):
+ conv0 = self.conv_bn_layer(
+ input=input,
+ num_filters=num_filters,
+ filter_size=1,
+ act='relu',
+ name=name + "_branch2a")
+ conv1 = self.conv_bn_layer(
+ input=conv0,
+ num_filters=num_filters,
+ filter_size=3,
+ stride=stride,
+ act='relu',
+ name=name + "_branch2b")
+ conv2 = self.conv_bn_layer(
+ input=conv1,
+ num_filters=num_filters * 4,
+ filter_size=1,
+ act=None,
+ name=name + "_branch2c")
+
+ short = self.shortcut(
+ input,
+ num_filters * 4,
+ stride,
+ if_first=if_first,
+ name=name + "_branch1")
+
+ return fluid.layers.elementwise_add(x=short, y=conv2, act='relu')
+
+ def basic_block(self, input, num_filters, stride, name, if_first):
+ conv0 = self.conv_bn_layer(
+ input=input,
+ num_filters=num_filters,
+ filter_size=3,
+ act='relu',
+ stride=stride,
+ name=name + "_branch2a")
+ conv1 = self.conv_bn_layer(
+ input=conv0,
+ num_filters=num_filters,
+ filter_size=3,
+ act=None,
+ name=name + "_branch2b")
+ short = self.shortcut(
+ input,
+ num_filters,
+ stride,
+ if_first=if_first,
+ name=name + "_branch1")
+ return fluid.layers.elementwise_add(x=short, y=conv1, act='relu')
diff --git a/ppocr/modeling/backbones/rec_mobilenet_v3.py b/ppocr/modeling/backbones/rec_mobilenet_v3.py
index e504064242bfe6b0f433126ef4efb191b550a2eb..ff39a81210b7b71914f3c447b5e0035ac03db73b 100755
--- a/ppocr/modeling/backbones/rec_mobilenet_v3.py
+++ b/ppocr/modeling/backbones/rec_mobilenet_v3.py
@@ -31,16 +31,28 @@ __all__ = [
class MobileNetV3():
def __init__(self, params):
- self.scale = params['scale']
- model_name = params['model_name']
+ self.scale = params.get("scale", 0.5)
+ model_name = params.get("model_name", "small")
+ large_stride = params.get("large_stride", [1, 2, 2, 2])
+ small_stride = params.get("small_stride", [2, 2, 2, 2])
+
+ assert isinstance(large_stride, list), "large_stride type must " \
+ "be list but got {}".format(type(large_stride))
+ assert isinstance(small_stride, list), "small_stride type must " \
+ "be list but got {}".format(type(small_stride))
+ assert len(large_stride) == 4, "large_stride length must be " \
+ "4 but got {}".format(len(large_stride))
+ assert len(small_stride) == 4, "small_stride length must be " \
+ "4 but got {}".format(len(small_stride))
+
self.inplanes = 16
if model_name == "large":
self.cfg = [
# k, exp, c, se, nl, s,
- [3, 16, 16, False, 'relu', 1],
- [3, 64, 24, False, 'relu', (2, 1)],
+ [3, 16, 16, False, 'relu', large_stride[0]],
+ [3, 64, 24, False, 'relu', (large_stride[1], 1)],
[3, 72, 24, False, 'relu', 1],
- [5, 72, 40, True, 'relu', (2, 1)],
+ [5, 72, 40, True, 'relu', (large_stride[2], 1)],
[5, 120, 40, True, 'relu', 1],
[5, 120, 40, True, 'relu', 1],
[3, 240, 80, False, 'hard_swish', 1],
@@ -49,7 +61,7 @@ class MobileNetV3():
[3, 184, 80, False, 'hard_swish', 1],
[3, 480, 112, True, 'hard_swish', 1],
[3, 672, 112, True, 'hard_swish', 1],
- [5, 672, 160, True, 'hard_swish', (2, 1)],
+ [5, 672, 160, True, 'hard_swish', (large_stride[3], 1)],
[5, 960, 160, True, 'hard_swish', 1],
[5, 960, 160, True, 'hard_swish', 1],
]
@@ -58,15 +70,15 @@ class MobileNetV3():
elif model_name == "small":
self.cfg = [
# k, exp, c, se, nl, s,
- [3, 16, 16, True, 'relu', (2, 1)],
- [3, 72, 24, False, 'relu', (2, 1)],
+ [3, 16, 16, True, 'relu', (small_stride[0], 1)],
+ [3, 72, 24, False, 'relu', (small_stride[1], 1)],
[3, 88, 24, False, 'relu', 1],
- [5, 96, 40, True, 'hard_swish', (2, 1)],
+ [5, 96, 40, True, 'hard_swish', (small_stride[2], 1)],
[5, 240, 40, True, 'hard_swish', 1],
[5, 240, 40, True, 'hard_swish', 1],
[5, 120, 48, True, 'hard_swish', 1],
[5, 144, 48, True, 'hard_swish', 1],
- [5, 288, 96, True, 'hard_swish', (2, 1)],
+ [5, 288, 96, True, 'hard_swish', (small_stride[3], 1)],
[5, 576, 96, True, 'hard_swish', 1],
[5, 576, 96, True, 'hard_swish', 1],
]
diff --git a/ppocr/modeling/backbones/rec_resnet50_fpn.py b/ppocr/modeling/backbones/rec_resnet50_fpn.py
new file mode 100755
index 0000000000000000000000000000000000000000..f6d72377fe4e2d3355a4510f070178ad48dd2a27
--- /dev/null
+++ b/ppocr/modeling/backbones/rec_resnet50_fpn.py
@@ -0,0 +1,172 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import paddle
+import paddle.fluid as fluid
+from paddle.fluid.param_attr import ParamAttr
+
+
+__all__ = ["ResNet", "ResNet18", "ResNet34", "ResNet50", "ResNet101", "ResNet152"]
+
+Trainable = True
+w_nolr = fluid.ParamAttr(
+ trainable = Trainable)
+train_parameters = {
+ "input_size": [3, 224, 224],
+ "input_mean": [0.485, 0.456, 0.406],
+ "input_std": [0.229, 0.224, 0.225],
+ "learning_strategy": {
+ "name": "piecewise_decay",
+ "batch_size": 256,
+ "epochs": [30, 60, 90],
+ "steps": [0.1, 0.01, 0.001, 0.0001]
+ }
+}
+
+class ResNet():
+ def __init__(self, params):
+ self.layers = params['layers']
+ self.params = train_parameters
+
+
+ def __call__(self, input):
+ layers = self.layers
+ supported_layers = [18, 34, 50, 101, 152]
+ assert layers in supported_layers, \
+ "supported layers are {} but input layer is {}".format(supported_layers, layers)
+
+ if layers == 18:
+ depth = [2, 2, 2, 2]
+ elif layers == 34 or layers == 50:
+ depth = [3, 4, 6, 3]
+ elif layers == 101:
+ depth = [3, 4, 23, 3]
+ elif layers == 152:
+ depth = [3, 8, 36, 3]
+ stride_list = [(2,2),(2,2),(1,1),(1,1)]
+ num_filters = [64, 128, 256, 512]
+
+ conv = self.conv_bn_layer(
+ input=input, num_filters=64, filter_size=7, stride=2, act='relu', name="conv1")
+ F = []
+ if layers >= 50:
+ for block in range(len(depth)):
+ for i in range(depth[block]):
+ if layers in [101, 152] and block == 2:
+ if i == 0:
+ conv_name = "res" + str(block + 2) + "a"
+ else:
+ conv_name = "res" + str(block + 2) + "b" + str(i)
+ else:
+ conv_name = "res" + str(block + 2) + chr(97 + i)
+ conv = self.bottleneck_block(
+ input=conv,
+ num_filters=num_filters[block],
+ stride=stride_list[block] if i == 0 else 1, name=conv_name)
+ F.append(conv)
+
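+        # FPN-style fusion: upsample the deepest feature map and merge it with
+        # the two preceding stages, then project the result to 512 channels.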
+ base = F[-1]
+ for i in [-2, -3]:
+ b, c, w, h = F[i].shape
+ if (w,h) == base.shape[2:]:
+ base = base
+ else:
+ base = fluid.layers.conv2d_transpose( input=base, num_filters=c,filter_size=4, stride=2,
+ padding=1,act=None,
+ param_attr=w_nolr,
+ bias_attr=w_nolr)
+ base = fluid.layers.batch_norm(base, act = "relu", param_attr=w_nolr, bias_attr=w_nolr)
+ base = fluid.layers.concat([base, F[i]], axis=1)
+ base = fluid.layers.conv2d(base, num_filters=c, filter_size=1, param_attr=w_nolr, bias_attr=w_nolr)
+ base = fluid.layers.conv2d(base, num_filters=c, filter_size=3,padding = 1, param_attr=w_nolr, bias_attr=w_nolr)
+ base = fluid.layers.batch_norm(base, act = "relu", param_attr=w_nolr, bias_attr=w_nolr)
+
+ base = fluid.layers.conv2d(base, num_filters=512, filter_size=1,bias_attr=w_nolr,param_attr=w_nolr)
+
+ return base
+
+ def conv_bn_layer(self,
+ input,
+ num_filters,
+ filter_size,
+ stride=1,
+ groups=1,
+ act=None,
+ name=None):
+ conv = fluid.layers.conv2d(
+ input=input,
+ num_filters=num_filters,
+ filter_size= 2 if stride==(1,1) else filter_size,
+ dilation = 2 if stride==(1,1) else 1,
+ stride=stride,
+ padding=(filter_size - 1) // 2,
+ groups=groups,
+ act=None,
+ param_attr=ParamAttr(name=name + "_weights",trainable = Trainable),
+ bias_attr=False,
+ name=name + '.conv2d.output.1')
+
+ if name == "conv1":
+ bn_name = "bn_" + name
+ else:
+ bn_name = "bn" + name[3:]
+ return fluid.layers.batch_norm(input=conv,
+ act=act,
+ name=bn_name + '.output.1',
+ param_attr=ParamAttr(name=bn_name + '_scale',trainable = Trainable),
+ bias_attr=ParamAttr(bn_name + '_offset',trainable = Trainable),
+ moving_mean_name=bn_name + '_mean',
+ moving_variance_name=bn_name + '_variance', )
+
+ def shortcut(self, input, ch_out, stride, is_first, name):
+ ch_in = input.shape[1]
+ if ch_in != ch_out or stride != 1 or is_first == True:
+ if stride == (1,1):
+ return self.conv_bn_layer(input, ch_out, 1, 1, name=name)
+ else: #stride == (2,2)
+ return self.conv_bn_layer(input, ch_out, 1, stride, name=name)
+
+ else:
+ return input
+
+ def bottleneck_block(self, input, num_filters, stride, name):
+ conv0 = self.conv_bn_layer(
+ input=input, num_filters=num_filters, filter_size=1, act='relu', name=name + "_branch2a")
+ conv1 = self.conv_bn_layer(
+ input=conv0,
+ num_filters=num_filters,
+ filter_size=3,
+ stride=stride,
+ act='relu',
+ name=name + "_branch2b")
+ conv2 = self.conv_bn_layer(
+ input=conv1, num_filters=num_filters * 4, filter_size=1, act=None, name=name + "_branch2c")
+
+ short = self.shortcut(input, num_filters * 4, stride, is_first=False, name=name + "_branch1")
+
+ return fluid.layers.elementwise_add(x=short, y=conv2, act='relu', name=name + ".add.output.5")
+
+ def basic_block(self, input, num_filters, stride, is_first, name):
+ conv0 = self.conv_bn_layer(input=input, num_filters=num_filters, filter_size=3, act='relu', stride=stride,
+ name=name + "_branch2a")
+ conv1 = self.conv_bn_layer(input=conv0, num_filters=num_filters, filter_size=3, act=None,
+ name=name + "_branch2b")
+ short = self.shortcut(input, num_filters, stride, is_first, name=name + "_branch1")
+ return fluid.layers.elementwise_add(x=short, y=conv1, act='relu')
diff --git a/ppocr/modeling/heads/det_sast_head.py b/ppocr/modeling/heads/det_sast_head.py
new file mode 100644
index 0000000000000000000000000000000000000000..0097913dd7e08c76c45064940416e7c9ffc32f26
--- /dev/null
+++ b/ppocr/modeling/heads/det_sast_head.py
@@ -0,0 +1,228 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle.fluid as fluid
+from ..common_functions import conv_bn_layer, deconv_bn_layer
+from collections import OrderedDict
+
+
+class SASTHead(object):
+    """
+    SAST head:
+    see arxiv: https://arxiv.org/abs/1908.05498
+    Args:
+        params(dict): the hyperparameters for building the network
+    """
+
+ def __init__(self, params):
+ self.model_name = params['model_name']
+ self.with_cab = params['with_cab']
+
+ def FPN_Up_Fusion(self, blocks):
+        """
+        blocks{}: contains block_2, block_3, block_4, block_5 and block_6 with
+        1/4, 1/8, 1/16, 1/32 and 1/64 resolution respectively.
+        """
+ f = [blocks['block_6'], blocks['block_5'], blocks['block_4'], blocks['block_3'], blocks['block_2']]
+ num_outputs = [256, 256, 192, 192, 128]
+ g = [None, None, None, None, None]
+ h = [None, None, None, None, None]
+ for i in range(5):
+ h[i] = conv_bn_layer(input=f[i], num_filters=num_outputs[i],
+ filter_size=1, stride=1, act=None, name='fpn_up_h'+str(i))
+
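+        # Top-down fusion: starting from the lowest-resolution feature, repeatedly
+        # add the next level, refine with a 3x3 conv and upsample with a deconv.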
+ for i in range(4):
+ if i == 0:
+ g[i] = deconv_bn_layer(input=h[i], num_filters=num_outputs[i + 1], act=None, name='fpn_up_g0')
+ #print("g[{}] shape: {}".format(i, g[i].shape))
+ else:
+ g[i] = fluid.layers.elementwise_add(x=g[i - 1], y=h[i])
+ g[i] = fluid.layers.relu(g[i])
+ #g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i],
+ # filter_size=1, stride=1, act='relu')
+ g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i],
+ filter_size=3, stride=1, act='relu', name='fpn_up_g%d_1'%i)
+ g[i] = deconv_bn_layer(input=g[i], num_filters=num_outputs[i + 1], act=None, name='fpn_up_g%d_2'%i)
+ #print("g[{}] shape: {}".format(i, g[i].shape))
+
+ g[4] = fluid.layers.elementwise_add(x=g[3], y=h[4])
+ g[4] = fluid.layers.relu(g[4])
+ g[4] = conv_bn_layer(input=g[4], num_filters=num_outputs[4],
+ filter_size=3, stride=1, act='relu', name='fpn_up_fusion_1')
+ g[4] = conv_bn_layer(input=g[4], num_filters=num_outputs[4],
+ filter_size=1, stride=1, act=None, name='fpn_up_fusion_2')
+
+ return g[4]
+
+ def FPN_Down_Fusion(self, blocks):
+        """
+        blocks{}: contains block_0, block_1 and block_2 with
+        1, 1/2 and 1/4 resolution respectively.
+        """
+ f = [blocks['block_0'], blocks['block_1'], blocks['block_2']]
+ num_outputs = [32, 64, 128]
+ g = [None, None, None]
+ h = [None, None, None]
+ for i in range(3):
+ h[i] = conv_bn_layer(input=f[i], num_filters=num_outputs[i],
+ filter_size=3, stride=1, act=None, name='fpn_down_h'+str(i))
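+        # Bottom-up fusion: stride-2 convs progressively downsample the shallow
+        # features until they match the 1/4-resolution map, adding each level.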
+ for i in range(2):
+ if i == 0:
+ g[i] = conv_bn_layer(input=h[i], num_filters=num_outputs[i+1], filter_size=3, stride=2, act=None, name='fpn_down_g0')
+ else:
+ g[i] = fluid.layers.elementwise_add(x=g[i - 1], y=h[i])
+ g[i] = fluid.layers.relu(g[i])
+ g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i], filter_size=3, stride=1, act='relu', name='fpn_down_g%d_1'%i)
+ g[i] = conv_bn_layer(input=g[i], num_filters=num_outputs[i+1], filter_size=3, stride=2, act=None, name='fpn_down_g%d_2'%i)
+ # print("g[{}] shape: {}".format(i, g[i].shape))
+ g[2] = fluid.layers.elementwise_add(x=g[1], y=h[2])
+ g[2] = fluid.layers.relu(g[2])
+ g[2] = conv_bn_layer(input=g[2], num_filters=num_outputs[2],
+ filter_size=3, stride=1, act='relu', name='fpn_down_fusion_1')
+ g[2] = conv_bn_layer(input=g[2], num_filters=num_outputs[2],
+ filter_size=1, stride=1, act=None, name='fpn_down_fusion_2')
+ return g[2]
+
+ def SAST_Header1(self, f_common):
+ """Detector header."""
+ #f_score
+ f_score = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_score1')
+ f_score = conv_bn_layer(input=f_score, num_filters=64, filter_size=3, stride=1, act='relu', name='f_score2')
+ f_score = conv_bn_layer(input=f_score, num_filters=128, filter_size=1, stride=1, act='relu', name='f_score3')
+ f_score = conv_bn_layer(input=f_score, num_filters=1, filter_size=3, stride=1, name='f_score4')
+ f_score = fluid.layers.sigmoid(f_score)
+ # print("f_score shape: {}".format(f_score.shape))
+
+        #f_border
+ f_border = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_border1')
+ f_border = conv_bn_layer(input=f_border, num_filters=64, filter_size=3, stride=1, act='relu', name='f_border2')
+ f_border = conv_bn_layer(input=f_border, num_filters=128, filter_size=1, stride=1, act='relu', name='f_border3')
+ f_border = conv_bn_layer(input=f_border, num_filters=4, filter_size=3, stride=1, name='f_border4')
+ # print("f_border shape: {}".format(f_border.shape))
+
+ return f_score, f_border
+
+ def SAST_Header2(self, f_common):
+ """Detector header."""
+ #f_tvo
+ f_tvo = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_tvo1')
+ f_tvo = conv_bn_layer(input=f_tvo, num_filters=64, filter_size=3, stride=1, act='relu', name='f_tvo2')
+ f_tvo = conv_bn_layer(input=f_tvo, num_filters=128, filter_size=1, stride=1, act='relu', name='f_tvo3')
+ f_tvo = conv_bn_layer(input=f_tvo, num_filters=8, filter_size=3, stride=1, name='f_tvo4')
+ # print("f_tvo shape: {}".format(f_tvo.shape))
+
+ #f_tco
+ f_tco = conv_bn_layer(input=f_common, num_filters=64, filter_size=1, stride=1, act='relu', name='f_tco1')
+ f_tco = conv_bn_layer(input=f_tco, num_filters=64, filter_size=3, stride=1, act='relu', name='f_tco2')
+ f_tco = conv_bn_layer(input=f_tco, num_filters=128, filter_size=1, stride=1, act='relu', name='f_tco3')
+ f_tco = conv_bn_layer(input=f_tco, num_filters=2, filter_size=3, stride=1, name='f_tco4')
+ # print("f_tco shape: {}".format(f_tco.shape))
+
+ return f_tvo, f_tco
+
+ def cross_attention(self, f_common):
+        """
+        Context attention block (CAB): self-attention is applied separately along
+        the horizontal and vertical axes of the feature map, and the two results
+        are concatenated and fused by a 1x1 conv.
+        """
+ f_shape = fluid.layers.shape(f_common)
+ f_theta = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_theta')
+ f_phi = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_phi')
+ f_g = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, act='relu', name='f_g')
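+        # f_theta / f_phi / f_g play the roles of query / key / value projections.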
+ ### horizon
+ fh_theta = f_theta
+ fh_phi = f_phi
+ fh_g = f_g
+ #flatten
+ fh_theta = fluid.layers.transpose(fh_theta, [0, 2, 3, 1])
+ fh_theta = fluid.layers.reshape(fh_theta, [f_shape[0] * f_shape[2], f_shape[3], 128])
+ fh_phi = fluid.layers.transpose(fh_phi, [0, 2, 3, 1])
+ fh_phi = fluid.layers.reshape(fh_phi, [f_shape[0] * f_shape[2], f_shape[3], 128])
+ fh_g = fluid.layers.transpose(fh_g, [0, 2, 3, 1])
+ fh_g = fluid.layers.reshape(fh_g, [f_shape[0] * f_shape[2], f_shape[3], 128])
+ #correlation
+ fh_attn = fluid.layers.matmul(fh_theta, fluid.layers.transpose(fh_phi, [0, 2, 1]))
+ #scale
+ fh_attn = fh_attn / (128 ** 0.5)
+ fh_attn = fluid.layers.softmax(fh_attn)
+ #weighted sum
+ fh_weight = fluid.layers.matmul(fh_attn, fh_g)
+ fh_weight = fluid.layers.reshape(fh_weight, [f_shape[0], f_shape[2], f_shape[3], 128])
+ # print("fh_weight: {}".format(fh_weight.shape))
+ fh_weight = fluid.layers.transpose(fh_weight, [0, 3, 1, 2])
+ fh_weight = conv_bn_layer(input=fh_weight, num_filters=128, filter_size=1, stride=1, name='fh_weight')
+ #short cut
+ fh_sc = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, name='fh_sc')
+ f_h = fluid.layers.relu(fh_weight + fh_sc)
+ ######
+ #vertical
+ fv_theta = fluid.layers.transpose(f_theta, [0, 1, 3, 2])
+ fv_phi = fluid.layers.transpose(f_phi, [0, 1, 3, 2])
+ fv_g = fluid.layers.transpose(f_g, [0, 1, 3, 2])
+ #flatten
+ fv_theta = fluid.layers.transpose(fv_theta, [0, 2, 3, 1])
+ fv_theta = fluid.layers.reshape(fv_theta, [f_shape[0] * f_shape[3], f_shape[2], 128])
+ fv_phi = fluid.layers.transpose(fv_phi, [0, 2, 3, 1])
+ fv_phi = fluid.layers.reshape(fv_phi, [f_shape[0] * f_shape[3], f_shape[2], 128])
+ fv_g = fluid.layers.transpose(fv_g, [0, 2, 3, 1])
+ fv_g = fluid.layers.reshape(fv_g, [f_shape[0] * f_shape[3], f_shape[2], 128])
+ #correlation
+ fv_attn = fluid.layers.matmul(fv_theta, fluid.layers.transpose(fv_phi, [0, 2, 1]))
+ #scale
+ fv_attn = fv_attn / (128 ** 0.5)
+ fv_attn = fluid.layers.softmax(fv_attn)
+ #weighted sum
+ fv_weight = fluid.layers.matmul(fv_attn, fv_g)
+ fv_weight = fluid.layers.reshape(fv_weight, [f_shape[0], f_shape[3], f_shape[2], 128])
+ # print("fv_weight: {}".format(fv_weight.shape))
+ fv_weight = fluid.layers.transpose(fv_weight, [0, 3, 2, 1])
+ fv_weight = conv_bn_layer(input=fv_weight, num_filters=128, filter_size=1, stride=1, name='fv_weight')
+ #short cut
+ fv_sc = conv_bn_layer(input=f_common, num_filters=128, filter_size=1, stride=1, name='fv_sc')
+ f_v = fluid.layers.relu(fv_weight + fv_sc)
+ ######
+ f_attn = fluid.layers.concat([f_h, f_v], axis=1)
+ f_attn = conv_bn_layer(input=f_attn, num_filters=128, filter_size=1, stride=1, act='relu', name='f_attn')
+ return f_attn
+
+ def __call__(self, blocks, with_cab=False):
+ # for k, v in blocks.items():
+ # print(k, v.shape)
+
+ #down fpn
+ f_down = self.FPN_Down_Fusion(blocks)
+ # print("f_down shape: {}".format(f_down.shape))
+ #up fpn
+ f_up = self.FPN_Up_Fusion(blocks)
+ # print("f_up shape: {}".format(f_up.shape))
+ #fusion
+ f_common = fluid.layers.elementwise_add(x=f_down, y=f_up)
+ f_common = fluid.layers.relu(f_common)
+ # print("f_common: {}".format(f_common.shape))
+
+ if self.with_cab:
+            # enhance f_common with the cross attention block (CAB)
+ f_common = self.cross_attention(f_common)
+
+ f_score, f_border= self.SAST_Header1(f_common)
+ f_tvo, f_tco = self.SAST_Header2(f_common)
+
+ predicts = OrderedDict()
+ predicts['f_score'] = f_score
+ predicts['f_border'] = f_border
+ predicts['f_tvo'] = f_tvo
+ predicts['f_tco'] = f_tco
+ return predicts
\ No newline at end of file
diff --git a/ppocr/modeling/heads/rec_ctc_head.py b/ppocr/modeling/heads/rec_ctc_head.py
index 37b4b00f8a16b219c978c685ea8a0d37234b40e4..6b8635e4647f186390179b880e132641342df0d6 100755
--- a/ppocr/modeling/heads/rec_ctc_head.py
+++ b/ppocr/modeling/heads/rec_ctc_head.py
@@ -32,6 +32,7 @@ class CTCPredict(object):
self.char_num = params['char_num']
self.encoder = SequenceEncoder(params)
self.encoder_type = params['encoder_type']
+ self.fc_decay = params.get("fc_decay", 0.0004)
def __call__(self, inputs, labels=None, mode=None):
encoder_features = self.encoder(inputs)
@@ -39,7 +40,7 @@ class CTCPredict(object):
encoder_features = fluid.layers.concat(encoder_features, axis=1)
name = "ctc_fc"
para_attr, bias_attr = get_para_bias_attr(
- l2_decay=0.0004, k=encoder_features.shape[1], name=name)
+ l2_decay=self.fc_decay, k=encoder_features.shape[1], name=name)
predict = fluid.layers.fc(input=encoder_features,
size=self.char_num + 1,
param_attr=para_attr,
diff --git a/ppocr/modeling/heads/rec_srn_all_head.py b/ppocr/modeling/heads/rec_srn_all_head.py
new file mode 100755
index 0000000000000000000000000000000000000000..e1bb955d437faca243e3768e24b47f4328218624
--- /dev/null
+++ b/ppocr/modeling/heads/rec_srn_all_head.py
@@ -0,0 +1,230 @@
+#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import paddle
+import paddle.fluid as fluid
+from paddle.fluid.param_attr import ParamAttr
+import numpy as np
+from .self_attention.model import wrap_encoder
+from .self_attention.model import wrap_encoder_forFeature
+gradient_clip = 10
+
+
+class SRNPredict(object):
+ def __init__(self, params):
+ super(SRNPredict, self).__init__()
+ self.char_num = params['char_num']
+ self.max_length = params['max_text_length']
+
+ self.num_heads = params['num_heads']
+ self.num_encoder_TUs = params['num_encoder_TUs']
+ self.num_decoder_TUs = params['num_decoder_TUs']
+ self.hidden_dims = params['hidden_dims']
+
+ def pvam(self, inputs, others):
+
+ b, c, h, w = inputs.shape
+ conv_features = fluid.layers.reshape(x=inputs, shape=[-1, c, h * w])
+ conv_features = fluid.layers.transpose(x=conv_features, perm=[0, 2, 1])
+
+ #===== Transformer encoder =====
+ b, t, c = conv_features.shape
+ encoder_word_pos = others["encoder_word_pos"]
+ gsrm_word_pos = others["gsrm_word_pos"]
+
+ enc_inputs = [conv_features, encoder_word_pos, None]
+ word_features = wrap_encoder_forFeature(
+ src_vocab_size=-1,
+ max_length=t,
+ n_layer=self.num_encoder_TUs,
+ n_head=self.num_heads,
+ d_key=int(self.hidden_dims / self.num_heads),
+ d_value=int(self.hidden_dims / self.num_heads),
+ d_model=self.hidden_dims,
+ d_inner_hid=self.hidden_dims,
+ prepostprocess_dropout=0.1,
+ attention_dropout=0.1,
+ relu_dropout=0.1,
+ preprocess_cmd="n",
+ postprocess_cmd="da",
+ weight_sharing=True,
+ enc_inputs=enc_inputs, )
+ fluid.clip.set_gradient_clip(
+ fluid.clip.GradientClipByValue(gradient_clip))
+
+ #===== Parallel Visual Attention Module =====
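+        # Each of the max_length reading-order positions attends over the t visual
+        # features; the resulting attention map has shape [b, max_length, t].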
+ b, t, c = word_features.shape
+
+ word_features = fluid.layers.fc(word_features, c, num_flatten_dims=2)
+ word_features_ = fluid.layers.reshape(word_features, [-1, 1, t, c])
+ word_features_ = fluid.layers.expand(word_features_,
+ [1, self.max_length, 1, 1])
+ word_pos_feature = fluid.layers.embedding(gsrm_word_pos,
+ [self.max_length, c])
+ word_pos_ = fluid.layers.reshape(word_pos_feature,
+ [-1, self.max_length, 1, c])
+ word_pos_ = fluid.layers.expand(word_pos_, [1, 1, t, 1])
+ temp = fluid.layers.elementwise_add(
+ word_features_, word_pos_, act='tanh')
+
+ attention_weight = fluid.layers.fc(input=temp,
+ size=1,
+ num_flatten_dims=3,
+ bias_attr=False)
+ attention_weight = fluid.layers.reshape(
+ x=attention_weight, shape=[-1, self.max_length, t])
+ attention_weight = fluid.layers.softmax(input=attention_weight, axis=-1)
+
+ pvam_features = fluid.layers.matmul(attention_weight,
+ word_features) #[b, max_length, c]
+
+ return pvam_features
+
+ def gsrm(self, pvam_features, others):
+
+ #===== GSRM Visual-to-semantic embedding block =====
+ b, t, c = pvam_features.shape
+ word_out = fluid.layers.fc(
+ input=fluid.layers.reshape(pvam_features, [-1, c]),
+ size=self.char_num,
+ act="softmax")
+ #word_out.stop_gradient = True
+ word_ids = fluid.layers.argmax(word_out, axis=1)
+ word_ids.stop_gradient = True
+ word_ids = fluid.layers.reshape(x=word_ids, shape=[-1, t, 1])
+
+ #===== GSRM Semantic reasoning block =====
+        """
+        This module is implemented with two transformers (bi-transformers):
+        gsrm_feature1 is the forward one, gsrm_feature2 is the backward one.
+        """
+ pad_idx = self.char_num
+ gsrm_word_pos = others["gsrm_word_pos"]
+ gsrm_slf_attn_bias1 = others["gsrm_slf_attn_bias1"]
+ gsrm_slf_attn_bias2 = others["gsrm_slf_attn_bias2"]
+
+ def prepare_bi(word_ids):
+ """
+ prepare bi for gsrm
+ word1 for forward; word2 for backward
+ """
+ word1 = fluid.layers.cast(word_ids, "float32")
+ word1 = fluid.layers.pad(word1, [0, 0, 1, 0, 0, 0],
+ pad_value=1.0 * pad_idx)
+ word1 = fluid.layers.cast(word1, "int64")
+ word1 = word1[:, :-1, :]
+ word2 = word_ids
+ return word1, word2
+
+ word1, word2 = prepare_bi(word_ids)
+ word1.stop_gradient = True
+ word2.stop_gradient = True
+ enc_inputs_1 = [word1, gsrm_word_pos, gsrm_slf_attn_bias1]
+ enc_inputs_2 = [word2, gsrm_word_pos, gsrm_slf_attn_bias2]
+
+ gsrm_feature1 = wrap_encoder(
+ src_vocab_size=self.char_num + 1,
+ max_length=self.max_length,
+ n_layer=self.num_decoder_TUs,
+ n_head=self.num_heads,
+ d_key=int(self.hidden_dims / self.num_heads),
+ d_value=int(self.hidden_dims / self.num_heads),
+ d_model=self.hidden_dims,
+ d_inner_hid=self.hidden_dims,
+ prepostprocess_dropout=0.1,
+ attention_dropout=0.1,
+ relu_dropout=0.1,
+ preprocess_cmd="n",
+ postprocess_cmd="da",
+ weight_sharing=True,
+ enc_inputs=enc_inputs_1, )
+ gsrm_feature2 = wrap_encoder(
+ src_vocab_size=self.char_num + 1,
+ max_length=self.max_length,
+ n_layer=self.num_decoder_TUs,
+ n_head=self.num_heads,
+ d_key=int(self.hidden_dims / self.num_heads),
+ d_value=int(self.hidden_dims / self.num_heads),
+ d_model=self.hidden_dims,
+ d_inner_hid=self.hidden_dims,
+ prepostprocess_dropout=0.1,
+ attention_dropout=0.1,
+ relu_dropout=0.1,
+ preprocess_cmd="n",
+ postprocess_cmd="da",
+ weight_sharing=True,
+ enc_inputs=enc_inputs_2, )
+ gsrm_feature2 = fluid.layers.pad(gsrm_feature2, [0, 0, 0, 1, 0, 0],
+ pad_value=0.)
+ gsrm_feature2 = gsrm_feature2[:, 1:, ]
+ gsrm_features = gsrm_feature1 + gsrm_feature2
+
+ b, t, c = gsrm_features.shape
+
+ gsrm_out = fluid.layers.matmul(
+ x=gsrm_features,
+ y=fluid.default_main_program().global_block().var(
+ "src_word_emb_table"),
+ transpose_y=True)
+ b, t, c = gsrm_out.shape
+ gsrm_out = fluid.layers.softmax(input=fluid.layers.reshape(gsrm_out,
+ [-1, c]))
+
+ return gsrm_features, word_out, gsrm_out
+
+ def vsfd(self, pvam_features, gsrm_features):
+
+ #===== Visual-Semantic Fusion Decoder Module =====
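+        # A learned sigmoid gate blends the visual (PVAM) and semantic (GSRM)
+        # features element-wise before the final character classifier.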
+ b, t, c1 = pvam_features.shape
+ b, t, c2 = gsrm_features.shape
+ combine_features_ = fluid.layers.concat(
+ [pvam_features, gsrm_features], axis=2)
+ img_comb_features_ = fluid.layers.reshape(
+ x=combine_features_, shape=[-1, c1 + c2])
+ img_comb_features_map = fluid.layers.fc(input=img_comb_features_,
+ size=c1,
+ act="sigmoid")
+ img_comb_features_map = fluid.layers.reshape(
+ x=img_comb_features_map, shape=[-1, t, c1])
+ combine_features = img_comb_features_map * pvam_features + (
+ 1.0 - img_comb_features_map) * gsrm_features
+ img_comb_features = fluid.layers.reshape(
+ x=combine_features, shape=[-1, c1])
+
+ fc_out = fluid.layers.fc(input=img_comb_features,
+ size=self.char_num,
+ act="softmax")
+ return fc_out
+
+ def __call__(self, inputs, others, mode=None):
+
+ pvam_features = self.pvam(inputs, others)
+ gsrm_features, word_out, gsrm_out = self.gsrm(pvam_features, others)
+ final_out = self.vsfd(pvam_features, gsrm_features)
+
+ _, decoded_out = fluid.layers.topk(input=final_out, k=1)
+ predicts = {
+ 'predict': final_out,
+ 'decoded_out': decoded_out,
+ 'word_out': word_out,
+ 'gsrm_out': gsrm_out
+ }
+
+ return predicts
diff --git a/ppocr/modeling/heads/self_attention/__init__.py b/ppocr/modeling/heads/self_attention/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..e69de29bb2d1d6434b8b29ae775ad8c2e48c5391
diff --git a/ppocr/modeling/heads/self_attention/model.py b/ppocr/modeling/heads/self_attention/model.py
new file mode 100644
index 0000000000000000000000000000000000000000..8ac1458b7dca3dedc368a16fa00f52a9aa4f4f93
--- /dev/null
+++ b/ppocr/modeling/heads/self_attention/model.py
@@ -0,0 +1,1058 @@
+from functools import partial
+import numpy as np
+
+import paddle.fluid as fluid
+import paddle.fluid.layers as layers
+
+# Set seed for CE
+dropout_seed = None
+
+
+def wrap_layer_with_block(layer, block_idx):
+    """
+    Make a layer function support specifying the block its ops are added to,
+    so that layers can be added to blocks other than the current one. This
+    makes it easy to define caches used inside a while loop.
+    """
+
+ class BlockGuard(object):
+ """
+ BlockGuard class.
+
+ BlockGuard class is used to switch to the given block in a program by
+ using the Python `with` keyword.
+ """
+
+ def __init__(self, block_idx=None, main_program=None):
+ self.main_program = fluid.default_main_program(
+ ) if main_program is None else main_program
+ self.old_block_idx = self.main_program.current_block().idx
+ self.new_block_idx = block_idx
+
+ def __enter__(self):
+ self.main_program.current_block_idx = self.new_block_idx
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.main_program.current_block_idx = self.old_block_idx
+ if exc_type is not None:
+ return False # re-raise exception
+ return True
+
+ def layer_wrapper(*args, **kwargs):
+ with BlockGuard(block_idx):
+ return layer(*args, **kwargs)
+
+ return layer_wrapper
+
+
+def position_encoding_init(n_position, d_pos_vec):
+ """
+ Generate the initial values for the sinusoid position encoding table.
+ """
+ channels = d_pos_vec
+ position = np.arange(n_position)
+ num_timescales = channels // 2
+ log_timescale_increment = (np.log(float(1e4) / float(1)) /
+ (num_timescales - 1))
+ inv_timescales = np.exp(np.arange(
+ num_timescales)) * -log_timescale_increment
+ scaled_time = np.expand_dims(position, 1) * np.expand_dims(inv_timescales,
+ 0)
+ signal = np.concatenate([np.sin(scaled_time), np.cos(scaled_time)], axis=1)
+ signal = np.pad(signal, [[0, 0], [0, np.mod(channels, 2)]], 'constant')
+ position_enc = signal
+ return position_enc.astype("float32")
+
+
+def multi_head_attention(queries,
+ keys,
+ values,
+ attn_bias,
+ d_key,
+ d_value,
+ d_model,
+ n_head=1,
+ dropout_rate=0.,
+ cache=None,
+ gather_idx=None,
+ static_kv=False):
+    """
+    Multi-Head Attention. Note that attn_bias is added to the logits before
+    computing the softmax activation, to mask certain selected positions so
+    that they will not be considered in the attention weights.
+    """
+ keys = queries if keys is None else keys
+ values = keys if values is None else values
+
+ if not (len(queries.shape) == len(keys.shape) == len(values.shape) == 3):
+ raise ValueError(
+            "Inputs: queries, keys and values should all be 3-D tensors.")
+
+ def __compute_qkv(queries, keys, values, n_head, d_key, d_value):
+ """
+ Add linear projection to queries, keys, and values.
+ """
+ q = layers.fc(input=queries,
+ size=d_key * n_head,
+ bias_attr=False,
+ num_flatten_dims=2)
+ # For encoder-decoder attention in inference, insert the ops and vars
+ # into global block to use as cache among beam search.
+ fc_layer = wrap_layer_with_block(
+ layers.fc, fluid.default_main_program().current_block()
+ .parent_idx) if cache is not None and static_kv else layers.fc
+ k = fc_layer(
+ input=keys,
+ size=d_key * n_head,
+ bias_attr=False,
+ num_flatten_dims=2)
+ v = fc_layer(
+ input=values,
+ size=d_value * n_head,
+ bias_attr=False,
+ num_flatten_dims=2)
+ return q, k, v
+
+ def __split_heads_qkv(queries, keys, values, n_head, d_key, d_value):
+ """
+ Reshape input tensors at the last dimension to split multi-heads
+ and then transpose. Specifically, transform the input tensor with shape
+ [bs, max_sequence_length, n_head * hidden_dim] to the output tensor
+ with shape [bs, n_head, max_sequence_length, hidden_dim].
+ """
+ # The value 0 in shape attr means copying the corresponding dimension
+ # size of the input as the output dimension size.
+ reshaped_q = layers.reshape(
+ x=queries, shape=[0, 0, n_head, d_key], inplace=True)
+        # permute the dimensions into:
+ # [batch_size, n_head, max_sequence_len, hidden_size_per_head]
+ q = layers.transpose(x=reshaped_q, perm=[0, 2, 1, 3])
+ # For encoder-decoder attention in inference, insert the ops and vars
+ # into global block to use as cache among beam search.
+ reshape_layer = wrap_layer_with_block(
+ layers.reshape,
+ fluid.default_main_program().current_block()
+ .parent_idx) if cache is not None and static_kv else layers.reshape
+ transpose_layer = wrap_layer_with_block(
+ layers.transpose,
+ fluid.default_main_program().current_block().
+ parent_idx) if cache is not None and static_kv else layers.transpose
+ reshaped_k = reshape_layer(
+ x=keys, shape=[0, 0, n_head, d_key], inplace=True)
+ k = transpose_layer(x=reshaped_k, perm=[0, 2, 1, 3])
+ reshaped_v = reshape_layer(
+ x=values, shape=[0, 0, n_head, d_value], inplace=True)
+ v = transpose_layer(x=reshaped_v, perm=[0, 2, 1, 3])
+
+ if cache is not None: # only for faster inference
+ if static_kv: # For encoder-decoder attention in inference
+ cache_k, cache_v = cache["static_k"], cache["static_v"]
+ # To init the static_k and static_v in cache.
+ # Maybe we can use condition_op(if_else) to do these at the first
+ # step in while loop to replace these, however it might be less
+ # efficient.
+ static_cache_init = wrap_layer_with_block(
+ layers.assign,
+ fluid.default_main_program().current_block().parent_idx)
+ static_cache_init(k, cache_k)
+ static_cache_init(v, cache_v)
+ else: # For decoder self-attention in inference
+ cache_k, cache_v = cache["k"], cache["v"]
+ # gather cell states corresponding to selected parent
+ select_k = layers.gather(cache_k, index=gather_idx)
+ select_v = layers.gather(cache_v, index=gather_idx)
+ if not static_kv:
+ # For self attention in inference, use cache and concat time steps.
+ select_k = layers.concat([select_k, k], axis=2)
+ select_v = layers.concat([select_v, v], axis=2)
+ # update cell states(caches) cached in global block
+ layers.assign(select_k, cache_k)
+ layers.assign(select_v, cache_v)
+ return q, select_k, select_v
+ return q, k, v
+
+ def __combine_heads(x):
+        """
+        Transpose and then reshape the last two dimensions of the input tensor x
+        so that they become one dimension, which is the reverse of __split_heads_qkv.
+        """
+ if len(x.shape) != 4:
+ raise ValueError("Input(x) should be a 4-D Tensor.")
+
+ trans_x = layers.transpose(x, perm=[0, 2, 1, 3])
+ # The value 0 in shape attr means copying the corresponding dimension
+ # size of the input as the output dimension size.
+ return layers.reshape(
+ x=trans_x,
+ shape=[0, 0, trans_x.shape[2] * trans_x.shape[3]],
+ inplace=True)
+
+ def scaled_dot_product_attention(q, k, v, attn_bias, d_key, dropout_rate):
+ """
+ Scaled Dot-Product Attention
+ """
+ # print(q)
+ # print(k)
+
+ product = layers.matmul(x=q, y=k, transpose_y=True, alpha=d_key**-0.5)
+ if attn_bias:
+ product += attn_bias
+ weights = layers.softmax(product)
+ if dropout_rate:
+ weights = layers.dropout(
+ weights,
+ dropout_prob=dropout_rate,
+ seed=dropout_seed,
+ is_test=False)
+ out = layers.matmul(weights, v)
+ return out
+
+ q, k, v = __compute_qkv(queries, keys, values, n_head, d_key, d_value)
+ q, k, v = __split_heads_qkv(q, k, v, n_head, d_key, d_value)
+
+ ctx_multiheads = scaled_dot_product_attention(q, k, v, attn_bias, d_model,
+ dropout_rate)
+
+ out = __combine_heads(ctx_multiheads)
+
+ # Project back to the model size.
+ proj_out = layers.fc(input=out,
+ size=d_model,
+ bias_attr=False,
+ num_flatten_dims=2)
+ return proj_out
+
+
+def positionwise_feed_forward(x, d_inner_hid, d_hid, dropout_rate):
+ """
+ Position-wise Feed-Forward Networks.
+ This module consists of two linear transformations with a ReLU activation
+ in between, which is applied to each position separately and identically.
+ """
+ hidden = layers.fc(input=x,
+ size=d_inner_hid,
+ num_flatten_dims=2,
+ act="relu")
+ if dropout_rate:
+ hidden = layers.dropout(
+ hidden, dropout_prob=dropout_rate, seed=dropout_seed, is_test=False)
+ out = layers.fc(input=hidden, size=d_hid, num_flatten_dims=2)
+ return out
+
+
+def pre_post_process_layer(prev_out, out, process_cmd, dropout_rate=0.):
+ """
+    Add residual connection, layer normalization and dropout to the out tensor
+ optionally according to the value of process_cmd.
+ This will be used before or after multi-head attention and position-wise
+ feed-forward networks.
+ """
+ for cmd in process_cmd:
+ if cmd == "a": # add residual connection
+ out = out + prev_out if prev_out else out
+ elif cmd == "n": # add layer normalization
+ out = layers.layer_norm(
+ out,
+ begin_norm_axis=len(out.shape) - 1,
+ param_attr=fluid.initializer.Constant(1.),
+ bias_attr=fluid.initializer.Constant(0.))
+ elif cmd == "d": # add dropout
+ if dropout_rate:
+ out = layers.dropout(
+ out,
+ dropout_prob=dropout_rate,
+ seed=dropout_seed,
+ is_test=False)
+ return out
+
+
+pre_process_layer = partial(pre_post_process_layer, None)
+post_process_layer = pre_post_process_layer
+
+
+def prepare_encoder(
+ src_word, #[b,t,c]
+ src_pos,
+ src_vocab_size,
+ src_emb_dim,
+ src_max_len,
+ dropout_rate=0.,
+ bos_idx=0,
+ word_emb_param_name=None,
+ pos_enc_param_name=None):
+ """Add word embeddings and position encodings.
+ The output tensor has a shape of:
+ [batch_size, max_src_length_in_batch, d_model].
+ This module is used at the bottom of the encoder stacks.
+ """
+
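+    # For SRN the encoder input is already a sequence of continuous conv features,
+    # so no embedding lookup is performed; only scaling, position encoding and
+    # dropout are applied.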
+ src_word_emb = src_word #layers.concat(res,axis=1)
+ src_word_emb = layers.cast(src_word_emb, 'float32')
+ # print("src_word_emb",src_word_emb)
+
+ src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5)
+ src_pos_enc = layers.embedding(
+ src_pos,
+ size=[src_max_len, src_emb_dim],
+ param_attr=fluid.ParamAttr(
+ name=pos_enc_param_name, trainable=False))
+ src_pos_enc.stop_gradient = True
+ enc_input = src_word_emb + src_pos_enc
+ return layers.dropout(
+ enc_input, dropout_prob=dropout_rate, seed=dropout_seed,
+ is_test=False) if dropout_rate else enc_input
+
+
+def prepare_decoder(src_word,
+ src_pos,
+ src_vocab_size,
+ src_emb_dim,
+ src_max_len,
+ dropout_rate=0.,
+ bos_idx=0,
+ word_emb_param_name=None,
+ pos_enc_param_name=None):
+ """Add word embeddings and position encodings.
+ The output tensor has a shape of:
+ [batch_size, max_src_length_in_batch, d_model].
+ This module is used at the bottom of the encoder stacks.
+ """
+ src_word_emb = layers.embedding(
+ src_word,
+ size=[src_vocab_size, src_emb_dim],
+ padding_idx=bos_idx, # set embedding of bos to 0
+ param_attr=fluid.ParamAttr(
+ name=word_emb_param_name,
+ initializer=fluid.initializer.Normal(0., src_emb_dim**-0.5)))
+ # print("target_word_emb",src_word_emb)
+ src_word_emb = layers.scale(x=src_word_emb, scale=src_emb_dim**0.5)
+ src_pos_enc = layers.embedding(
+ src_pos,
+ size=[src_max_len, src_emb_dim],
+ param_attr=fluid.ParamAttr(
+ name=pos_enc_param_name, trainable=False))
+ src_pos_enc.stop_gradient = True
+ enc_input = src_word_emb + src_pos_enc
+ return layers.dropout(
+ enc_input, dropout_prob=dropout_rate, seed=dropout_seed,
+ is_test=False) if dropout_rate else enc_input
+
+
+# prepare_encoder = partial(
+# prepare_encoder_decoder, pos_enc_param_name=pos_enc_param_names[0])
+# prepare_decoder = partial(
+# prepare_encoder_decoder, pos_enc_param_name=pos_enc_param_names[1])
+
+
+def encoder_layer(enc_input,
+ attn_bias,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd="n",
+ postprocess_cmd="da"):
+    """The encoder layers that can be stacked to form a deep encoder.
+    This module consists of multi-head (self) attention followed by a
+    position-wise feed-forward network; both components are wrapped with
+    post_process_layer to add residual connection, layer normalization
+    and dropout.
+    """
+ attn_output = multi_head_attention(
+ pre_process_layer(enc_input, preprocess_cmd,
+ prepostprocess_dropout), None, None, attn_bias, d_key,
+ d_value, d_model, n_head, attention_dropout)
+ attn_output = post_process_layer(enc_input, attn_output, postprocess_cmd,
+ prepostprocess_dropout)
+ ffd_output = positionwise_feed_forward(
+ pre_process_layer(attn_output, preprocess_cmd, prepostprocess_dropout),
+ d_inner_hid, d_model, relu_dropout)
+ return post_process_layer(attn_output, ffd_output, postprocess_cmd,
+ prepostprocess_dropout)
+
+
+def encoder(enc_input,
+ attn_bias,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd="n",
+ postprocess_cmd="da"):
+ """
+ The encoder is composed of a stack of identical layers returned by calling
+ encoder_layer.
+ """
+ for i in range(n_layer):
+ enc_output = encoder_layer(
+ enc_input,
+ attn_bias,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd, )
+ enc_input = enc_output
+ enc_output = pre_process_layer(enc_output, preprocess_cmd,
+ prepostprocess_dropout)
+ return enc_output
+
+
+def decoder_layer(dec_input,
+ enc_output,
+ slf_attn_bias,
+ dec_enc_attn_bias,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ cache=None,
+ gather_idx=None):
+ """ The layer to be stacked in decoder part.
+ The structure of this module is similar to that in the encoder part except
+ a multi-head attention is added to implement encoder-decoder attention.
+ """
+ slf_attn_output = multi_head_attention(
+ pre_process_layer(dec_input, preprocess_cmd, prepostprocess_dropout),
+ None,
+ None,
+ slf_attn_bias,
+ d_key,
+ d_value,
+ d_model,
+ n_head,
+ attention_dropout,
+ cache=cache,
+ gather_idx=gather_idx)
+ slf_attn_output = post_process_layer(
+ dec_input,
+ slf_attn_output,
+ postprocess_cmd,
+ prepostprocess_dropout, )
+ enc_attn_output = multi_head_attention(
+ pre_process_layer(slf_attn_output, preprocess_cmd,
+ prepostprocess_dropout),
+ enc_output,
+ enc_output,
+ dec_enc_attn_bias,
+ d_key,
+ d_value,
+ d_model,
+ n_head,
+ attention_dropout,
+ cache=cache,
+ gather_idx=gather_idx,
+ static_kv=True)
+ enc_attn_output = post_process_layer(
+ slf_attn_output,
+ enc_attn_output,
+ postprocess_cmd,
+ prepostprocess_dropout, )
+ ffd_output = positionwise_feed_forward(
+ pre_process_layer(enc_attn_output, preprocess_cmd,
+ prepostprocess_dropout),
+ d_inner_hid,
+ d_model,
+ relu_dropout, )
+ dec_output = post_process_layer(
+ enc_attn_output,
+ ffd_output,
+ postprocess_cmd,
+ prepostprocess_dropout, )
+ return dec_output
+
+
+def decoder(dec_input,
+ enc_output,
+ dec_slf_attn_bias,
+ dec_enc_attn_bias,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ caches=None,
+ gather_idx=None):
+ """
+ The decoder is composed of a stack of identical decoder_layer layers.
+ """
+ for i in range(n_layer):
+ dec_output = decoder_layer(
+ dec_input,
+ enc_output,
+ dec_slf_attn_bias,
+ dec_enc_attn_bias,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ cache=None if caches is None else caches[i],
+ gather_idx=gather_idx)
+ dec_input = dec_output
+ dec_output = pre_process_layer(dec_output, preprocess_cmd,
+ prepostprocess_dropout)
+ return dec_output
+
+
+def make_all_inputs(input_fields):
+ """
+ Define the input data layers for the transformer model.
+ """
+ inputs = []
+ for input_field in input_fields:
+ input_var = layers.data(
+ name=input_field,
+ shape=input_descs[input_field][0],
+ dtype=input_descs[input_field][1],
+ lod_level=input_descs[input_field][2]
+ if len(input_descs[input_field]) == 3 else 0,
+ append_batch_size=False)
+ inputs.append(input_var)
+ return inputs
+
+
+def make_all_py_reader_inputs(input_fields, is_test=False):
+ reader = layers.py_reader(
+ capacity=20,
+ name="test_reader" if is_test else "train_reader",
+ shapes=[input_descs[input_field][0] for input_field in input_fields],
+ dtypes=[input_descs[input_field][1] for input_field in input_fields],
+ lod_levels=[
+ input_descs[input_field][2]
+ if len(input_descs[input_field]) == 3 else 0
+ for input_field in input_fields
+ ])
+ return layers.read_file(reader), reader
+
+
+def transformer(src_vocab_size,
+ trg_vocab_size,
+ max_length,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ label_smooth_eps,
+ bos_idx=0,
+ use_py_reader=False,
+ is_test=False):
+ if weight_sharing:
+ assert src_vocab_size == trg_vocab_size, (
+ "Vocabularies in source and target should be same for weight sharing."
+ )
+
+ data_input_names = encoder_data_input_fields + \
+ decoder_data_input_fields[:-1] + label_data_input_fields
+
+ if use_py_reader:
+ all_inputs, reader = make_all_py_reader_inputs(data_input_names,
+ is_test)
+ else:
+ all_inputs = make_all_inputs(data_input_names)
+ # print("all inputs",all_inputs)
+ enc_inputs_len = len(encoder_data_input_fields)
+ dec_inputs_len = len(decoder_data_input_fields[:-1])
+ enc_inputs = all_inputs[0:enc_inputs_len]
+ dec_inputs = all_inputs[enc_inputs_len:enc_inputs_len + dec_inputs_len]
+ label = all_inputs[-2]
+ weights = all_inputs[-1]
+
+ enc_output = wrap_encoder(
+ src_vocab_size, 64, n_layer, n_head, d_key, d_value, d_model,
+ d_inner_hid, prepostprocess_dropout, attention_dropout, relu_dropout,
+ preprocess_cmd, postprocess_cmd, weight_sharing, enc_inputs)
+
+ predict = wrap_decoder(
+ trg_vocab_size,
+ max_length,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ dec_inputs,
+ enc_output, )
+
+ # Padding index do not contribute to the total loss. The weights is used to
+ # cancel padding index in calculating the loss.
+ if label_smooth_eps:
+ label = layers.label_smooth(
+ label=layers.one_hot(
+ input=label, depth=trg_vocab_size),
+ epsilon=label_smooth_eps)
+
+ cost = layers.softmax_with_cross_entropy(
+ logits=predict,
+ label=label,
+ soft_label=True if label_smooth_eps else False)
+ weighted_cost = cost * weights
+ sum_cost = layers.reduce_sum(weighted_cost)
+ token_num = layers.reduce_sum(weights)
+ token_num.stop_gradient = True
+ avg_cost = sum_cost / token_num
+ return sum_cost, avg_cost, predict, token_num, reader if use_py_reader else None
+
+
+def wrap_encoder_forFeature(src_vocab_size,
+ max_length,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ enc_inputs=None,
+ bos_idx=0):
+    """
+    The wrapper assembles together all needed layers for the encoder that
+    consumes convolutional feature maps:
+    conv_features, src_pos, src_slf_attn_bias = enc_inputs
+    """
+
+ if enc_inputs is None:
+ # This is used to implement independent encoder program in inference.
+ conv_features, src_pos, src_slf_attn_bias = make_all_inputs(
+ encoder_data_input_fields)
+ else:
+ conv_features, src_pos, src_slf_attn_bias = enc_inputs #
+ b, t, c = conv_features.shape
+
+ enc_input = prepare_encoder(
+ conv_features,
+ src_pos,
+ src_vocab_size,
+ d_model,
+ max_length,
+ prepostprocess_dropout,
+ bos_idx=bos_idx,
+ word_emb_param_name="src_word_emb_table")
+
+ enc_output = encoder(
+ enc_input,
+ src_slf_attn_bias,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd, )
+ return enc_output
+
+
+def wrap_encoder(src_vocab_size,
+ max_length,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ enc_inputs=None,
+ bos_idx=0):
+    """
+    The wrapper assembles together all needed layers for the encoder that
+    consumes word ids:
+    src_word, src_pos, src_slf_attn_bias = enc_inputs
+    """
+ if enc_inputs is None:
+ # This is used to implement independent encoder program in inference.
+ src_word, src_pos, src_slf_attn_bias = make_all_inputs(
+ encoder_data_input_fields)
+ else:
+ src_word, src_pos, src_slf_attn_bias = enc_inputs #
+ enc_input = prepare_decoder(
+ src_word,
+ src_pos,
+ src_vocab_size,
+ d_model,
+ max_length,
+ prepostprocess_dropout,
+ bos_idx=bos_idx,
+ word_emb_param_name="src_word_emb_table")
+
+ enc_output = encoder(
+ enc_input,
+ src_slf_attn_bias,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd, )
+ return enc_output
+
+
+def wrap_decoder(trg_vocab_size,
+ max_length,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ dec_inputs=None,
+ enc_output=None,
+ caches=None,
+ gather_idx=None,
+ bos_idx=0):
+ """
+ The wrapper assembles together all needed layers for the decoder.
+ """
+ if dec_inputs is None:
+ # This is used to implement independent decoder program in inference.
+ trg_word, trg_pos, trg_slf_attn_bias, trg_src_attn_bias, enc_output = \
+ make_all_inputs(decoder_data_input_fields)
+ else:
+ trg_word, trg_pos, trg_slf_attn_bias, trg_src_attn_bias = dec_inputs
+
+ dec_input = prepare_decoder(
+ trg_word,
+ trg_pos,
+ trg_vocab_size,
+ d_model,
+ max_length,
+ prepostprocess_dropout,
+ bos_idx=bos_idx,
+ word_emb_param_name="src_word_emb_table"
+ if weight_sharing else "trg_word_emb_table")
+ dec_output = decoder(
+ dec_input,
+ enc_output,
+ trg_slf_attn_bias,
+ trg_src_attn_bias,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ caches=caches,
+ gather_idx=gather_idx)
+ return dec_output
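+    # NOTE: dec_output is returned above, so the projection code below is never
+    # reached here; it appears to be left over from the original Transformer model.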
+ # Reshape to 2D tensor to use GEMM instead of BatchedGEMM
+ dec_output = layers.reshape(
+ dec_output, shape=[-1, dec_output.shape[-1]], inplace=True)
+ if weight_sharing:
+ predict = layers.matmul(
+ x=dec_output,
+ y=fluid.default_main_program().global_block().var(
+ "trg_word_emb_table"),
+ transpose_y=True)
+ else:
+ predict = layers.fc(input=dec_output,
+ size=trg_vocab_size,
+ bias_attr=False)
+ if dec_inputs is None:
+ # Return probs for independent decoder program.
+ predict = layers.softmax(predict)
+ return predict
+
+
+def fast_decode(src_vocab_size,
+ trg_vocab_size,
+ max_in_len,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ beam_size,
+ max_out_len,
+ bos_idx,
+ eos_idx,
+ use_py_reader=False):
+ """
+ Use beam search to decode. Caches will be used to store states of history
+ steps which can make the decoding faster.
+ """
+ data_input_names = encoder_data_input_fields + fast_decoder_data_input_fields
+
+ if use_py_reader:
+ all_inputs, reader = make_all_py_reader_inputs(data_input_names)
+ else:
+ all_inputs = make_all_inputs(data_input_names)
+
+ enc_inputs_len = len(encoder_data_input_fields)
+ dec_inputs_len = len(fast_decoder_data_input_fields)
+ enc_inputs = all_inputs[0:enc_inputs_len] #enc_inputs tensor
+ dec_inputs = all_inputs[enc_inputs_len:enc_inputs_len +
+ dec_inputs_len] #dec_inputs tensor
+
+ enc_output = wrap_encoder(
+ src_vocab_size,
+        64,  # TODO: max_length is hard-coded here
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ enc_inputs,
+ bos_idx=bos_idx)
+ start_tokens, init_scores, parent_idx, trg_src_attn_bias = dec_inputs
+
+ def beam_search():
+ max_len = layers.fill_constant(
+ shape=[1],
+ dtype=start_tokens.dtype,
+ value=max_out_len,
+ force_cpu=True)
+ step_idx = layers.fill_constant(
+ shape=[1], dtype=start_tokens.dtype, value=0, force_cpu=True)
+ cond = layers.less_than(x=step_idx, y=max_len) # default force_cpu=True
+ while_op = layers.While(cond)
+ # array states will be stored for each step.
+ ids = layers.array_write(
+ layers.reshape(start_tokens, (-1, 1)), step_idx)
+ scores = layers.array_write(init_scores, step_idx)
+        # cell states will be overwritten at each step.
+ # caches contains states of history steps in decoder self-attention
+ # and static encoder output projections in encoder-decoder attention
+ # to reduce redundant computation.
+ caches = [
+ {
+ "k": # for self attention
+ layers.fill_constant_batch_size_like(
+ input=start_tokens,
+ shape=[-1, n_head, 0, d_key],
+ dtype=enc_output.dtype,
+ value=0),
+ "v": # for self attention
+ layers.fill_constant_batch_size_like(
+ input=start_tokens,
+ shape=[-1, n_head, 0, d_value],
+ dtype=enc_output.dtype,
+ value=0),
+ "static_k": # for encoder-decoder attention
+ layers.create_tensor(dtype=enc_output.dtype),
+ "static_v": # for encoder-decoder attention
+ layers.create_tensor(dtype=enc_output.dtype)
+ } for i in range(n_layer)
+ ]
+
+ with while_op.block():
+ pre_ids = layers.array_read(array=ids, i=step_idx)
+            # Since beam_search_op doesn't enforce pre_ids' shape, we can do an
+            # inplace reshape here, which actually changes the shape of pre_ids.
+ pre_ids = layers.reshape(pre_ids, (-1, 1, 1), inplace=True)
+ pre_scores = layers.array_read(array=scores, i=step_idx)
+ # gather cell states corresponding to selected parent
+ pre_src_attn_bias = layers.gather(
+ trg_src_attn_bias, index=parent_idx)
+ pre_pos = layers.elementwise_mul(
+ x=layers.fill_constant_batch_size_like(
+                    input=pre_src_attn_bias,  # can't use lod tensor here
+ value=1,
+ shape=[-1, 1, 1],
+ dtype=pre_ids.dtype),
+ y=step_idx,
+ axis=0)
+ logits = wrap_decoder(
+ trg_vocab_size,
+ max_in_len,
+ n_layer,
+ n_head,
+ d_key,
+ d_value,
+ d_model,
+ d_inner_hid,
+ prepostprocess_dropout,
+ attention_dropout,
+ relu_dropout,
+ preprocess_cmd,
+ postprocess_cmd,
+ weight_sharing,
+ dec_inputs=(pre_ids, pre_pos, None, pre_src_attn_bias),
+ enc_output=enc_output,
+ caches=caches,
+ gather_idx=parent_idx,
+ bos_idx=bos_idx)
+ # intra-beam topK
+ topk_scores, topk_indices = layers.topk(
+ input=layers.softmax(logits), k=beam_size)
+ accu_scores = layers.elementwise_add(
+ x=layers.log(topk_scores), y=pre_scores, axis=0)
+ # beam_search op uses lod to differentiate branches.
+ accu_scores = layers.lod_reset(accu_scores, pre_ids)
+            # topK reduction across beams, which also contains special handling of
+            # finished beams and finished sentences (batch reduction)
+ selected_ids, selected_scores, gather_idx = layers.beam_search(
+ pre_ids=pre_ids,
+ pre_scores=pre_scores,
+ ids=topk_indices,
+ scores=accu_scores,
+ beam_size=beam_size,
+ end_id=eos_idx,
+ return_parent_idx=True)
+ layers.increment(x=step_idx, value=1.0, in_place=True)
+ # cell states(caches) have been updated in wrap_decoder,
+ # only need to update beam search states here.
+ layers.array_write(selected_ids, i=step_idx, array=ids)
+ layers.array_write(selected_scores, i=step_idx, array=scores)
+ layers.assign(gather_idx, parent_idx)
+ layers.assign(pre_src_attn_bias, trg_src_attn_bias)
+ length_cond = layers.less_than(x=step_idx, y=max_len)
+ finish_cond = layers.logical_not(layers.is_empty(x=selected_ids))
+ layers.logical_and(x=length_cond, y=finish_cond, out=cond)
+
+ finished_ids, finished_scores = layers.beam_search_decode(
+ ids, scores, beam_size=beam_size, end_id=eos_idx)
+ return finished_ids, finished_scores
+
+ finished_ids, finished_scores = beam_search()
+ return finished_ids, finished_scores, reader if use_py_reader else None
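
The caches set up in fast_decode hold the decoder self-attention keys and values of all previous steps (plus the static encoder projections), so each new step only computes attention for the newest token instead of re-encoding the whole prefix. Below is a minimal NumPy sketch of that idea, separate from the Paddle graph code above; the single-head shapes, weight names and random inputs are illustrative assumptions, not part of the model.

    import numpy as np

    d = 8  # per-head width, illustrative only
    rng = np.random.default_rng(0)
    Wq, Wk, Wv = (rng.standard_normal((d, d)) for _ in range(3))

    def attend(q, keys, values):
        # single-head scaled dot-product attention for one query vector
        scores = keys @ q / np.sqrt(d)             # (t,)
        weights = np.exp(scores - scores.max())
        weights /= weights.sum()
        return weights @ values                    # (d,)

    cache_k = np.zeros((0, d))
    cache_v = np.zeros((0, d))
    for step in range(5):
        x = rng.standard_normal(d)                 # embedding of the newest token
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        cache_k = np.vstack([cache_k, k])          # append only the new key/value,
        cache_v = np.vstack([cache_v, v])          # history is reused from the cache
        context = attend(q, cache_k, cache_v)      # attention over all steps so far
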
diff --git a/ppocr/modeling/losses/det_sast_loss.py b/ppocr/modeling/losses/det_sast_loss.py
new file mode 100644
index 0000000000000000000000000000000000000000..fb1a545af9d67b61878b3fb476fdc81746e8b992
--- /dev/null
+++ b/ppocr/modeling/losses/det_sast_loss.py
@@ -0,0 +1,115 @@
+#copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import paddle.fluid as fluid
+
+
+class SASTLoss(object):
+ """
+ SAST Loss function
+ """
+
+ def __init__(self, params=None):
+ super(SASTLoss, self).__init__()
+
+ def __call__(self, predicts, labels):
+ """
+        predicts: dict of network outputs, with keys f_score, f_border, f_tvo, f_tco
+        labels: dict of ground truths, with keys input_score, input_border, input_mask, input_tvo, input_tco
+ """
+
+ f_score = predicts['f_score']
+ f_border = predicts['f_border']
+ f_tvo = predicts['f_tvo']
+ f_tco = predicts['f_tco']
+
+ l_score = labels['input_score']
+ l_border = labels['input_border']
+ l_mask = labels['input_mask']
+ l_tvo = labels['input_tvo']
+ l_tco = labels['input_tco']
+
+ #score_loss
+ intersection = fluid.layers.reduce_sum(f_score * l_score * l_mask)
+ union = fluid.layers.reduce_sum(f_score * l_mask) + fluid.layers.reduce_sum(l_score * l_mask)
+ score_loss = 1.0 - 2 * intersection / (union + 1e-5)
+
+ #border loss
+ l_border_split, l_border_norm = fluid.layers.split(l_border, num_or_sections=[4, 1], dim=1)
+ f_border_split = f_border
+ l_border_norm_split = fluid.layers.expand(x=l_border_norm, expand_times=[1, 4, 1, 1])
+ l_border_score = fluid.layers.expand(x=l_score, expand_times=[1, 4, 1, 1])
+ l_border_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 4, 1, 1])
+ border_diff = l_border_split - f_border_split
+ abs_border_diff = fluid.layers.abs(border_diff)
+ border_sign = abs_border_diff < 1.0
+ border_sign = fluid.layers.cast(border_sign, dtype='float32')
+ border_sign.stop_gradient = True
+ border_in_loss = 0.5 * abs_border_diff * abs_border_diff * border_sign + \
+ (abs_border_diff - 0.5) * (1.0 - border_sign)
+ border_out_loss = l_border_norm_split * border_in_loss
+ border_loss = fluid.layers.reduce_sum(border_out_loss * l_border_score * l_border_mask) / \
+ (fluid.layers.reduce_sum(l_border_score * l_border_mask) + 1e-5)
+
+ #tvo_loss
+ l_tvo_split, l_tvo_norm = fluid.layers.split(l_tvo, num_or_sections=[8, 1], dim=1)
+ f_tvo_split = f_tvo
+ l_tvo_norm_split = fluid.layers.expand(x=l_tvo_norm, expand_times=[1, 8, 1, 1])
+ l_tvo_score = fluid.layers.expand(x=l_score, expand_times=[1, 8, 1, 1])
+ l_tvo_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 8, 1, 1])
+ #
+ tvo_geo_diff = l_tvo_split - f_tvo_split
+ abs_tvo_geo_diff = fluid.layers.abs(tvo_geo_diff)
+ tvo_sign = abs_tvo_geo_diff < 1.0
+ tvo_sign = fluid.layers.cast(tvo_sign, dtype='float32')
+ tvo_sign.stop_gradient = True
+ tvo_in_loss = 0.5 * abs_tvo_geo_diff * abs_tvo_geo_diff * tvo_sign + \
+ (abs_tvo_geo_diff - 0.5) * (1.0 - tvo_sign)
+ tvo_out_loss = l_tvo_norm_split * tvo_in_loss
+ tvo_loss = fluid.layers.reduce_sum(tvo_out_loss * l_tvo_score * l_tvo_mask) / \
+ (fluid.layers.reduce_sum(l_tvo_score * l_tvo_mask) + 1e-5)
+
+ #tco_loss
+ l_tco_split, l_tco_norm = fluid.layers.split(l_tco, num_or_sections=[2, 1], dim=1)
+ f_tco_split = f_tco
+ l_tco_norm_split = fluid.layers.expand(x=l_tco_norm, expand_times=[1, 2, 1, 1])
+ l_tco_score = fluid.layers.expand(x=l_score, expand_times=[1, 2, 1, 1])
+ l_tco_mask = fluid.layers.expand(x=l_mask, expand_times=[1, 2, 1, 1])
+ #
+ tco_geo_diff = l_tco_split - f_tco_split
+ abs_tco_geo_diff = fluid.layers.abs(tco_geo_diff)
+ tco_sign = abs_tco_geo_diff < 1.0
+ tco_sign = fluid.layers.cast(tco_sign, dtype='float32')
+ tco_sign.stop_gradient = True
+ tco_in_loss = 0.5 * abs_tco_geo_diff * abs_tco_geo_diff * tco_sign + \
+ (abs_tco_geo_diff - 0.5) * (1.0 - tco_sign)
+ tco_out_loss = l_tco_norm_split * tco_in_loss
+ tco_loss = fluid.layers.reduce_sum(tco_out_loss * l_tco_score * l_tco_mask) / \
+ (fluid.layers.reduce_sum(l_tco_score * l_tco_mask) + 1e-5)
+
+
+ # total loss
+ tvo_lw, tco_lw = 1.5, 1.5
+ score_lw, border_lw = 1.0, 1.0
+ total_loss = score_loss * score_lw + border_loss * border_lw + \
+ tvo_loss * tvo_lw + tco_loss * tco_lw
+
+ losses = {'total_loss':total_loss, "score_loss":score_loss,\
+ "border_loss":border_loss, 'tvo_loss':tvo_loss, 'tco_loss':tco_loss}
+ return losses
\ No newline at end of file
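
SASTLoss above combines a Dice-style overlap loss on the score map with smooth-L1 style regression losses on the border, tvo and tco maps, weighted 1.0/1.0/1.5/1.5. Below is a minimal NumPy sketch of those two building blocks on toy arrays; the helper names and the example values are illustrative, not part of the module.

    import numpy as np

    def dice_style_loss(pred, gt, mask, eps=1e-5):
        # 1 - 2*intersection/union over valid pixels, as in score_loss above
        inter = np.sum(pred * gt * mask)
        union = np.sum(pred * mask) + np.sum(gt * mask)
        return 1.0 - 2.0 * inter / (union + eps)

    def smooth_l1(diff):
        # 0.5*d^2 where |d| < 1, |d| - 0.5 elsewhere (the border/tvo/tco inner loss)
        ad = np.abs(diff)
        return np.where(ad < 1.0, 0.5 * ad * ad, ad - 0.5)

    pred = np.array([[0.9, 0.1], [0.8, 0.2]])
    gt   = np.array([[1.0, 0.0], [1.0, 0.0]])
    mask = np.ones_like(gt)
    print(dice_style_loss(pred, gt, mask))        # close to 0 when pred matches gt
    print(smooth_l1(np.array([-2.0, 0.3])))       # [1.5, 0.045]
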
diff --git a/ppocr/modeling/losses/rec_srn_loss.py b/ppocr/modeling/losses/rec_srn_loss.py
new file mode 100755
index 0000000000000000000000000000000000000000..b1ebd86fadfc48831ca3d03e79ef65f4be2aa5e8
--- /dev/null
+++ b/ppocr/modeling/losses/rec_srn_loss.py
@@ -0,0 +1,55 @@
+#copyright (c) 2019 PaddlePaddle Authors. All Rights Reserved.
+#
+#Licensed under the Apache License, Version 2.0 (the "License");
+#you may not use this file except in compliance with the License.
+#You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+#Unless required by applicable law or agreed to in writing, software
+#distributed under the License is distributed on an "AS IS" BASIS,
+#WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+#See the License for the specific language governing permissions and
+#limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import math
+
+import paddle
+import paddle.fluid as fluid
+
+
+class SRNLoss(object):
+ def __init__(self, params):
+ super(SRNLoss, self).__init__()
+ self.char_num = params['char_num']
+
+ def __call__(self, predicts, others):
+ predict = predicts['predict']
+ word_predict = predicts['word_out']
+ gsrm_predict = predicts['gsrm_out']
+ label = others['label']
+ lbl_weight = others['lbl_weight']
+
+ casted_label = fluid.layers.cast(x=label, dtype='int64')
+ cost_word = fluid.layers.cross_entropy(
+ input=word_predict, label=casted_label)
+ cost_gsrm = fluid.layers.cross_entropy(
+ input=gsrm_predict, label=casted_label)
+ cost_vsfd = fluid.layers.cross_entropy(
+ input=predict, label=casted_label)
+
+ cost_word = fluid.layers.reshape(
+ x=fluid.layers.reduce_sum(cost_word), shape=[1])
+ cost_gsrm = fluid.layers.reshape(
+ x=fluid.layers.reduce_sum(cost_gsrm), shape=[1])
+ cost_vsfd = fluid.layers.reshape(
+ x=fluid.layers.reduce_sum(cost_vsfd), shape=[1])
+
+ sum_cost = fluid.layers.sum(
+ [cost_word, cost_vsfd * 2.0, cost_gsrm * 0.15])
+
+ return [sum_cost, cost_vsfd, cost_word]
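
SRNLoss above sums three cross-entropy terms over the same labels: the word branch (word_out) with weight 1.0, the fused prediction (predict) with weight 2.0 and the GSRM branch (gsrm_out) with weight 0.15. A minimal NumPy sketch of that weighted combination on toy probabilities follows; the helper name and values are illustrative only.

    import numpy as np

    def cross_entropy_sum(probs, labels):
        # sum over samples of -log p(correct class), mirroring the reduce_sum above
        return float(np.sum(-np.log(probs[np.arange(len(labels)), labels])))

    word_p  = np.array([[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]])   # toy softmax outputs
    gsrm_p  = np.array([[0.6, 0.3, 0.1], [0.2, 0.7, 0.1]])
    fused_p = np.array([[0.8, 0.1, 0.1], [0.1, 0.85, 0.05]])
    labels  = np.array([0, 1])

    cost_word = cross_entropy_sum(word_p, labels)
    cost_gsrm = cross_entropy_sum(gsrm_p, labels)
    cost_vsfd = cross_entropy_sum(fused_p, labels)
    total = cost_word + 2.0 * cost_vsfd + 0.15 * cost_gsrm    # same weighting as sum_cost above
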
diff --git a/ppocr/optimizer.py b/ppocr/optimizer.py
old mode 100755
new mode 100644
index f5f4958351239a413fa2cb19dec4a068f1bb44f4..fd315cd1319d4925e893705957a42f931a39076e
--- a/ppocr/optimizer.py
+++ b/ppocr/optimizer.py
@@ -14,14 +14,50 @@
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
+import math
import paddle.fluid as fluid
from paddle.fluid.regularizer import L2Decay
+from paddle.fluid.layers.learning_rate_scheduler import _decay_step_counter
+import paddle.fluid.layers.ops as ops
from ppocr.utils.utility import initial_logger
logger = initial_logger()
+def cosine_decay_with_warmup(learning_rate,
+ step_each_epoch,
+ epochs=500,
+ warmup_minibatch=1000):
+    """Applies cosine decay to the learning rate, with a linear warmup.
+    warmup:  lr = learning_rate * global_step / warmup_minibatch
+    after:   lr = learning_rate * (cos((global_step - warmup_minibatch) * pi / (epochs * step_each_epoch)) + 1) / 2
+    The learning rate is updated every mini-batch.
+    """
+ global_step = _decay_step_counter()
+ lr = fluid.layers.tensor.create_global_var(
+ shape=[1],
+ value=0.0,
+ dtype='float32',
+ persistable=True,
+ name="learning_rate")
+
+ warmup_minibatch = fluid.layers.fill_constant(
+ shape=[1],
+ dtype='float32',
+ value=float(warmup_minibatch),
+ force_cpu=True)
+
+ with fluid.layers.control_flow.Switch() as switch:
+ with switch.case(global_step < warmup_minibatch):
+ decayed_lr = learning_rate * (1.0 * global_step / warmup_minibatch)
+ fluid.layers.tensor.assign(input=decayed_lr, output=lr)
+ with switch.default():
+ decayed_lr = learning_rate * \
+ (ops.cos((global_step - warmup_minibatch) * (math.pi / (epochs * step_each_epoch))) + 1)/2
+ fluid.layers.tensor.assign(input=decayed_lr, output=lr)
+ return lr
+
+
def AdamDecay(params, parameter_list=None):
"""
define optimizer function
@@ -36,17 +72,39 @@ def AdamDecay(params, parameter_list=None):
l2_decay = params.get("l2_decay", 0.0)
if 'decay' in params:
+ supported_decay_mode = [
+ "cosine_decay", "cosine_decay_warmup", "piecewise_decay"
+ ]
params = params['decay']
decay_mode = params['function']
- step_each_epoch = params['step_each_epoch']
- total_epoch = params['total_epoch']
+ assert decay_mode in supported_decay_mode, "Supported decay mode is {}, but got {}".format(
+ supported_decay_mode, decay_mode)
+
if decay_mode == "cosine_decay":
+ step_each_epoch = params['step_each_epoch']
+ total_epoch = params['total_epoch']
base_lr = fluid.layers.cosine_decay(
learning_rate=base_lr,
step_each_epoch=step_each_epoch,
epochs=total_epoch)
- else:
- logger.info("Only support Cosine decay currently")
+ elif decay_mode == "cosine_decay_warmup":
+ step_each_epoch = params['step_each_epoch']
+ total_epoch = params['total_epoch']
+ warmup_minibatch = params.get("warmup_minibatch", 1000)
+ base_lr = cosine_decay_with_warmup(
+ learning_rate=base_lr,
+ step_each_epoch=step_each_epoch,
+ epochs=total_epoch,
+ warmup_minibatch=warmup_minibatch)
+ elif decay_mode == "piecewise_decay":
+ boundaries = params["boundaries"]
+ decay_rate = params["decay_rate"]
+ values = [
+ base_lr * decay_rate**idx
+ for idx in range(len(boundaries) + 1)
+ ]
+ base_lr = fluid.layers.piecewise_decay(boundaries, values)
+
optimizer = fluid.optimizer.Adam(
learning_rate=base_lr,
beta1=beta1,
@@ -54,3 +112,44 @@ def AdamDecay(params, parameter_list=None):
regularization=L2Decay(regularization_coeff=l2_decay),
parameter_list=parameter_list)
return optimizer
+
+
+def RMSProp(params, parameter_list=None):
+ """
+ define optimizer function
+ args:
+        params(dict): the hyperparameters
+        parameter_list (list): list of Variables to update to minimize loss
+    return: the RMSProp optimizer
+ """
+ base_lr = params.get("base_lr", 0.001)
+ l2_decay = params.get("l2_decay", 0.00005)
+
+ if 'decay' in params:
+ supported_decay_mode = ["cosine_decay", "piecewise_decay"]
+ params = params['decay']
+ decay_mode = params['function']
+ assert decay_mode in supported_decay_mode, "Supported decay mode is {}, but got {}".format(
+ supported_decay_mode, decay_mode)
+
+ if decay_mode == "cosine_decay":
+ step_each_epoch = params['step_each_epoch']
+ total_epoch = params['total_epoch']
+ base_lr = fluid.layers.cosine_decay(
+ learning_rate=base_lr,
+ step_each_epoch=step_each_epoch,
+ epochs=total_epoch)
+ elif decay_mode == "piecewise_decay":
+ boundaries = params["boundaries"]
+ decay_rate = params["decay_rate"]
+ values = [
+ base_lr * decay_rate**idx
+ for idx in range(len(boundaries) + 1)
+ ]
+ base_lr = fluid.layers.piecewise_decay(boundaries, values)
+
+ optimizer = fluid.optimizer.RMSProp(
+ learning_rate=base_lr,
+ regularization=fluid.regularizer.L2Decay(regularization_coeff=l2_decay))
+
+ return optimizer
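
cosine_decay_with_warmup above builds the schedule as operators in the Paddle graph. The same curve is written below as a plain Python function so its shape is easy to see; the base learning rate and step counts in the example calls are made-up values.

    import math

    def lr_at_step(step, base_lr, step_each_epoch, epochs, warmup_minibatch=1000):
        # linear warmup, then half-cosine decay over the remaining training steps
        if step < warmup_minibatch:
            return base_lr * step / warmup_minibatch
        progress = (step - warmup_minibatch) * math.pi / (epochs * step_each_epoch)
        return base_lr * (math.cos(progress) + 1) / 2

    print(lr_at_step(500, 0.001, step_each_epoch=100, epochs=72))    # warming up: 0.0005
    print(lr_at_step(1000, 0.001, step_each_epoch=100, epochs=72))   # cosine phase starts at base_lr
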
diff --git a/ppocr/postprocess/east_postprocess.py b/ppocr/postprocess/east_postprocess.py
index 8200df3c281383fbc8c3f8df4f5090d923fb4d73..270cf6699bb7f77c730c6ff80b49f1798b9bb720 100755
--- a/ppocr/postprocess/east_postprocess.py
+++ b/ppocr/postprocess/east_postprocess.py
@@ -22,9 +22,9 @@ import cv2
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
class EASTPostPocess(object):
diff --git a/ppocr/postprocess/lanms/.ycm_extra_conf.py b/ppocr/postprocess/lanms/.ycm_extra_conf.py
index 3c8673ddbbd92042660545d123c6bbba4f0d8273..cd1a74e920bad8d84b755b5dbfbf83e6884836d6 100644
--- a/ppocr/postprocess/lanms/.ycm_extra_conf.py
+++ b/ppocr/postprocess/lanms/.ycm_extra_conf.py
@@ -25,7 +25,7 @@ import ycm_core
# These are the compilation flags that will be used in case there's no
# compilation database set (by default, one is not set).
# CHANGE THIS LIST OF FLAGS. YES, THIS IS THE DROID YOU HAVE BEEN LOOKING FOR.
-sys.path.append(os.path.dirname(__file__))
+sys.path.append(os.path.dirname(os.path.abspath(__file__)))
BASE_DIR = os.path.dirname(os.path.realpath(__file__))
diff --git a/ppocr/postprocess/sast_postprocess.py b/ppocr/postprocess/sast_postprocess.py
new file mode 100644
index 0000000000000000000000000000000000000000..ddce65542c6d12244419a37db838e070401b5290
--- /dev/null
+++ b/ppocr/postprocess/sast_postprocess.py
@@ -0,0 +1,289 @@
+# Copyright (c) 2020 PaddlePaddle Authors. All Rights Reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+from __future__ import absolute_import
+from __future__ import division
+from __future__ import print_function
+
+import os
+import sys
+__dir__ = os.path.dirname(__file__)
+sys.path.append(__dir__)
+sys.path.append(os.path.join(__dir__, '..'))
+
+import numpy as np
+from .locality_aware_nms import nms_locality
+# import lanms
+import cv2
+import time
+
+
+class SASTPostProcess(object):
+ """
+ The post process for SAST.
+ """
+
+ def __init__(self, params):
+ self.score_thresh = params.get('score_thresh', 0.5)
+ self.nms_thresh = params.get('nms_thresh', 0.2)
+ self.sample_pts_num = params.get('sample_pts_num', 2)
+ self.shrink_ratio_of_width = params.get('shrink_ratio_of_width', 0.3)
+ self.expand_scale = params.get('expand_scale', 1.0)
+ self.tcl_map_thresh = 0.5
+
+        # C++ la-nms is faster, but it only supports Python 3.5
+ self.is_python35 = False
+ if sys.version_info.major == 3 and sys.version_info.minor == 5:
+ self.is_python35 = True
+
+ def point_pair2poly(self, point_pair_list):
+ """
+        Transfer vertical point pairs into polygon points in clockwise order.
+        """
+        # construct poly
+ point_num = len(point_pair_list) * 2
+ point_list = [0] * point_num
+ for idx, point_pair in enumerate(point_pair_list):
+ point_list[idx] = point_pair[0]
+ point_list[point_num - 1 - idx] = point_pair[1]
+ return np.array(point_list).reshape(-1, 2)
+
+ def shrink_quad_along_width(self, quad, begin_width_ratio=0., end_width_ratio=1.):
+ """
+        Shrink a quad along its width between the given ratios.
+ """
+ ratio_pair = np.array([[begin_width_ratio], [end_width_ratio]], dtype=np.float32)
+ p0_1 = quad[0] + (quad[1] - quad[0]) * ratio_pair
+ p3_2 = quad[3] + (quad[2] - quad[3]) * ratio_pair
+ return np.array([p0_1[0], p0_1[1], p3_2[1], p3_2[0]])
+
+ def expand_poly_along_width(self, poly, shrink_ratio_of_width=0.3):
+ """
+ expand poly along width.
+ """
+ point_num = poly.shape[0]
+ left_quad = np.array([poly[0], poly[1], poly[-2], poly[-1]], dtype=np.float32)
+ left_ratio = -shrink_ratio_of_width * np.linalg.norm(left_quad[0] - left_quad[3]) / \
+ (np.linalg.norm(left_quad[0] - left_quad[1]) + 1e-6)
+ left_quad_expand = self.shrink_quad_along_width(left_quad, left_ratio, 1.0)
+ right_quad = np.array([poly[point_num // 2 - 2], poly[point_num // 2 - 1],
+ poly[point_num // 2], poly[point_num // 2 + 1]], dtype=np.float32)
+ right_ratio = 1.0 + \
+ shrink_ratio_of_width * np.linalg.norm(right_quad[0] - right_quad[3]) / \
+ (np.linalg.norm(right_quad[0] - right_quad[1]) + 1e-6)
+ right_quad_expand = self.shrink_quad_along_width(right_quad, 0.0, right_ratio)
+ poly[0] = left_quad_expand[0]
+ poly[-1] = left_quad_expand[-1]
+ poly[point_num // 2 - 1] = right_quad_expand[1]
+ poly[point_num // 2] = right_quad_expand[2]
+ return poly
+
+ def restore_quad(self, tcl_map, tcl_map_thresh, tvo_map):
+ """Restore quad."""
+ xy_text = np.argwhere(tcl_map[:, :, 0] > tcl_map_thresh)
+ xy_text = xy_text[:, ::-1] # (n, 2)
+
+ # Sort the text boxes via the y axis
+ xy_text = xy_text[np.argsort(xy_text[:, 1])]
+
+ scores = tcl_map[xy_text[:, 1], xy_text[:, 0], 0]
+ scores = scores[:, np.newaxis]
+
+ # Restore
+ point_num = int(tvo_map.shape[-1] / 2)
+ assert point_num == 4
+ tvo_map = tvo_map[xy_text[:, 1], xy_text[:, 0], :]
+ xy_text_tile = np.tile(xy_text, (1, point_num)) # (n, point_num * 2)
+ quads = xy_text_tile - tvo_map
+
+ return scores, quads, xy_text
+
+ def quad_area(self, quad):
+ """
+ compute area of a quad.
+ """
+ edge = [
+ (quad[1][0] - quad[0][0]) * (quad[1][1] + quad[0][1]),
+ (quad[2][0] - quad[1][0]) * (quad[2][1] + quad[1][1]),
+ (quad[3][0] - quad[2][0]) * (quad[3][1] + quad[2][1]),
+ (quad[0][0] - quad[3][0]) * (quad[0][1] + quad[3][1])
+ ]
+ return np.sum(edge) / 2.
+
+ def nms(self, dets):
+ if self.is_python35:
+ import lanms
+ dets = lanms.merge_quadrangle_n9(dets, self.nms_thresh)
+ else:
+ dets = nms_locality(dets, self.nms_thresh)
+ return dets
+
+ def cluster_by_quads_tco(self, tcl_map, tcl_map_thresh, quads, tco_map):
+ """
+ Cluster pixels in tcl_map based on quads.
+ """
+        instance_count = quads.shape[0] + 1  # plus one for the background
+ instance_label_map = np.zeros(tcl_map.shape[:2], dtype=np.int32)
+ if instance_count == 1:
+ return instance_count, instance_label_map
+
+ # predict text center
+ xy_text = np.argwhere(tcl_map[:, :, 0] > tcl_map_thresh)
+ n = xy_text.shape[0]
+ xy_text = xy_text[:, ::-1] # (n, 2)
+ tco = tco_map[xy_text[:, 1], xy_text[:, 0], :] # (n, 2)
+ pred_tc = xy_text - tco
+
+ # get gt text center
+ m = quads.shape[0]
+ gt_tc = np.mean(quads, axis=1) # (m, 2)
+
+ pred_tc_tile = np.tile(pred_tc[:, np.newaxis, :], (1, m, 1)) # (n, m, 2)
+ gt_tc_tile = np.tile(gt_tc[np.newaxis, :, :], (n, 1, 1)) # (n, m, 2)
+ dist_mat = np.linalg.norm(pred_tc_tile - gt_tc_tile, axis=2) # (n, m)
+ xy_text_assign = np.argmin(dist_mat, axis=1) + 1 # (n,)
+
+ instance_label_map[xy_text[:, 1], xy_text[:, 0]] = xy_text_assign
+ return instance_count, instance_label_map
+
+ def estimate_sample_pts_num(self, quad, xy_text):
+ """
+        Estimate the number of sample points.
+ """
+ eh = (np.linalg.norm(quad[0] - quad[3]) + np.linalg.norm(quad[1] - quad[2])) / 2.0
+ ew = (np.linalg.norm(quad[0] - quad[1]) + np.linalg.norm(quad[2] - quad[3])) / 2.0
+
+ dense_sample_pts_num = max(2, int(ew))
+ dense_xy_center_line = xy_text[np.linspace(0, xy_text.shape[0] - 1, dense_sample_pts_num,
+ endpoint=True, dtype=np.float32).astype(np.int32)]
+
+ dense_xy_center_line_diff = dense_xy_center_line[1:] - dense_xy_center_line[:-1]
+ estimate_arc_len = np.sum(np.linalg.norm(dense_xy_center_line_diff, axis=1))
+
+ sample_pts_num = max(2, int(estimate_arc_len / eh))
+ return sample_pts_num
+
+ def detect_sast(self, tcl_map, tvo_map, tbo_map, tco_map, ratio_w, ratio_h, src_w, src_h,
+                    shrink_ratio_of_width=0.3, tcl_map_thresh=0.5, offset_expand=1.0, out_stride=4.0):
+        """
+        First resize tcl_map, tvo_map and tbo_map to the input size, then restore the polys.
+        """
+ # restore quad
+ scores, quads, xy_text = self.restore_quad(tcl_map, tcl_map_thresh, tvo_map)
+ dets = np.hstack((quads, scores)).astype(np.float32, copy=False)
+ dets = self.nms(dets)
+ if dets.shape[0] == 0:
+ return []
+ quads = dets[:, :-1].reshape(-1, 4, 2)
+
+ # Compute quad area
+ quad_areas = []
+ for quad in quads:
+ quad_areas.append(-self.quad_area(quad))
+
+ # instance segmentation
+ # instance_count, instance_label_map = cv2.connectedComponents(tcl_map.astype(np.uint8), connectivity=8)
+ instance_count, instance_label_map = self.cluster_by_quads_tco(tcl_map, tcl_map_thresh, quads, tco_map)
+
+ # restore single poly with tcl instance.
+ poly_list = []
+ for instance_idx in range(1, instance_count):
+ xy_text = np.argwhere(instance_label_map == instance_idx)[:, ::-1]
+ quad = quads[instance_idx - 1]
+ q_area = quad_areas[instance_idx - 1]
+ if q_area < 5:
+ continue
+
+ #
+ len1 = float(np.linalg.norm(quad[0] -quad[1]))
+ len2 = float(np.linalg.norm(quad[1] -quad[2]))
+ min_len = min(len1, len2)
+ if min_len < 3:
+ continue
+
+ # filter small CC
+ if xy_text.shape[0] <= 0:
+ continue
+
+ # filter low confidence instance
+ xy_text_scores = tcl_map[xy_text[:, 1], xy_text[:, 0], 0]
+ if np.sum(xy_text_scores) / quad_areas[instance_idx - 1] < 0.1:
+ # if np.sum(xy_text_scores) / quad_areas[instance_idx - 1] < 0.05:
+ continue
+
+ # sort xy_text
+ left_center_pt = np.array([[(quad[0, 0] + quad[-1, 0]) / 2.0,
+ (quad[0, 1] + quad[-1, 1]) / 2.0]]) # (1, 2)
+ right_center_pt = np.array([[(quad[1, 0] + quad[2, 0]) / 2.0,
+ (quad[1, 1] + quad[2, 1]) / 2.0]]) # (1, 2)
+ proj_unit_vec = (right_center_pt - left_center_pt) / \
+ (np.linalg.norm(right_center_pt - left_center_pt) + 1e-6)
+ proj_value = np.sum(xy_text * proj_unit_vec, axis=1)
+ xy_text = xy_text[np.argsort(proj_value)]
+
+ # Sample pts in tcl map
+ if self.sample_pts_num == 0:
+ sample_pts_num = self.estimate_sample_pts_num(quad, xy_text)
+ else:
+ sample_pts_num = self.sample_pts_num
+ xy_center_line = xy_text[np.linspace(0, xy_text.shape[0] - 1, sample_pts_num,
+ endpoint=True, dtype=np.float32).astype(np.int32)]
+
+ point_pair_list = []
+ for x, y in xy_center_line:
+ # get corresponding offset
+ offset = tbo_map[y, x, :].reshape(2, 2)
+ if offset_expand != 1.0:
+ offset_length = np.linalg.norm(offset, axis=1, keepdims=True)
+ expand_length = np.clip(offset_length * (offset_expand - 1), a_min=0.5, a_max=3.0)
+                    offset_delta = offset / offset_length * expand_length
+                    offset = offset + offset_delta
+ # original point
+ ori_yx = np.array([y, x], dtype=np.float32)
+                point_pair = (ori_yx + offset)[:, ::-1] * out_stride / np.array([ratio_w, ratio_h]).reshape(-1, 2)
+ point_pair_list.append(point_pair)
+
+            # ndarray: (n, 2); expand poly along width
+ detected_poly = self.point_pair2poly(point_pair_list)
+ detected_poly = self.expand_poly_along_width(detected_poly, shrink_ratio_of_width)
+ detected_poly[:, 0] = np.clip(detected_poly[:, 0], a_min=0, a_max=src_w)
+ detected_poly[:, 1] = np.clip(detected_poly[:, 1], a_min=0, a_max=src_h)
+ poly_list.append(detected_poly)
+
+ return poly_list
+
+ def __call__(self, outs_dict, ratio_list):
+ score_list = outs_dict['f_score']
+ border_list = outs_dict['f_border']
+ tvo_list = outs_dict['f_tvo']
+ tco_list = outs_dict['f_tco']
+
+ img_num = len(ratio_list)
+ poly_lists = []
+ for ino in range(img_num):
+ p_score = score_list[ino].transpose((1,2,0))
+ p_border = border_list[ino].transpose((1,2,0))
+ p_tvo = tvo_list[ino].transpose((1,2,0))
+ p_tco = tco_list[ino].transpose((1,2,0))
+ # print(p_score.shape, p_border.shape, p_tvo.shape, p_tco.shape)
+ ratio_h, ratio_w, src_h, src_w = ratio_list[ino]
+
+ poly_list = self.detect_sast(p_score, p_tvo, p_border, p_tco, ratio_w, ratio_h, src_w, src_h,
+ shrink_ratio_of_width=self.shrink_ratio_of_width,
+ tcl_map_thresh=self.tcl_map_thresh, offset_expand=self.expand_scale)
+
+ poly_lists.append(poly_list)
+
+ return poly_lists
+
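
point_pair2poly in SASTPostProcess interleaves the sampled (top, bottom) point pairs into a single clockwise polygon: top points left to right, then bottom points right to left. A small standalone check of that ordering follows, with made-up coordinates.

    import numpy as np

    def point_pair2poly(point_pair_list):
        # same interleaving as SASTPostProcess.point_pair2poly above
        point_num = len(point_pair_list) * 2
        point_list = [0] * point_num
        for idx, point_pair in enumerate(point_pair_list):
            point_list[idx] = point_pair[0]                    # top edge, left to right
            point_list[point_num - 1 - idx] = point_pair[1]    # bottom edge, right to left
        return np.array(point_list).reshape(-1, 2)

    pairs = [(np.array([0, 0]), np.array([0, 10])),
             (np.array([5, 0]), np.array([5, 10])),
             (np.array([10, 0]), np.array([10, 10]))]
    print(point_pair2poly(pairs))
    # vertices in clockwise order: (0,0) (5,0) (10,0) (10,10) (5,10) (0,10)
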
diff --git a/ppocr/utils/character.py b/ppocr/utils/character.py
index 9a3db8dd92454c65256d1cadf7f155b6882ee171..c7c93fc557604a32d12343d929c119fd787ee126 100755
--- a/ppocr/utils/character.py
+++ b/ppocr/utils/character.py
@@ -25,6 +25,9 @@ class CharacterOps(object):
def __init__(self, config):
self.character_type = config['character_type']
self.loss_type = config['loss_type']
+ self.max_text_len = config['max_text_length']
+ if self.loss_type == "srn" and self.character_type != "en":
+            raise Exception("SRN is only supported when character_type is 'en'")
if self.character_type == "en":
self.character_str = "0123456789abcdefghijklmnopqrstuvwxyz"
dict_character = list(self.character_str)
@@ -54,6 +57,8 @@ class CharacterOps(object):
self.end_str = "eos"
if self.loss_type == "attention":
dict_character = [self.beg_str, self.end_str] + dict_character
+ elif self.loss_type == "srn":
+ dict_character = dict_character + [self.beg_str, self.end_str]
self.dict = {}
for i, char in enumerate(dict_character):
self.dict[char] = i
@@ -147,6 +152,39 @@ def cal_predicts_accuracy(char_ops,
return acc, acc_num, img_num
+def cal_predicts_accuracy_srn(char_ops,
+ preds,
+ labels,
+ max_text_len,
+ is_debug=False):
+ acc_num = 0
+ img_num = 0
+
+ total_len = preds.shape[0]
+ img_num = int(total_len / max_text_len)
+ for i in range(img_num):
+ cur_label = []
+ cur_pred = []
+ for j in range(max_text_len):
+            if labels[j + i * max_text_len] != 37:  # 37 is the end-token index (36 chars + beg + end)
+ cur_label.append(labels[j + i * max_text_len][0])
+ else:
+ break
+
+ for j in range(max_text_len + 1):
+ if j < len(cur_label) and preds[j + i * max_text_len][
+ 0] != cur_label[j]:
+ break
+ elif j == len(cur_label) and j == max_text_len:
+ acc_num += 1
+ break
+ elif j == len(cur_label) and preds[j + i * max_text_len][0] == 37:
+ acc_num += 1
+ break
+ acc = acc_num * 1.0 / img_num
+ return acc, acc_num, img_num
+
+
def convert_rec_attention_infer_res(preds):
img_num = preds.shape[0]
target_lod = [0]
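
For SRN, CharacterOps appends the begin/end markers after the 36 English characters, so the end token gets index 37; that is the constant checked in cal_predicts_accuracy_srn above. A short sketch of the index layout (end_str is "eos" as in the class; the begin-marker name is an assumption):

    character_str = "0123456789abcdefghijklmnopqrstuvwxyz"   # 36 characters, indices 0-35
    dict_character = list(character_str) + ["sos", "eos"]    # srn: markers appended last
    char_to_idx = {c: i for i, c in enumerate(dict_character)}
    print(char_to_idx["sos"])   # 36
    print(char_to_idx["eos"])   # 37, the stop index tested in cal_predicts_accuracy_srn
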
diff --git a/ppocr/utils/save_load.py b/ppocr/utils/save_load.py
old mode 100755
new mode 100644
index 80f64dc5835d1a33a9f746f71715f9ea202310da..f2346b3d22bbc4cf2e6d3ac54eaa1b378df6338b
--- a/ppocr/utils/save_load.py
+++ b/ppocr/utils/save_load.py
@@ -114,15 +114,15 @@ def init_model(config, program, exe):
fluid.load(program, path, exe)
logger.info("Finish initing model from {}".format(path))
else:
- raise ValueError(
- "Model checkpoints {} does not exists,"
- "check if you lost the file prefix.".format(checkpoints + '.pdparams'))
-
- pretrain_weights = config['Global'].get('pretrain_weights')
- if pretrain_weights:
- path = pretrain_weights
- load_params(exe, program, path)
- logger.info("Finish initing model from {}".format(path))
+            raise ValueError("Model checkpoint {} does not exist, "
+                             "check if you lost the file prefix.".format(
+ checkpoints + '.pdparams'))
+ else:
+ pretrain_weights = config['Global'].get('pretrain_weights')
+ if pretrain_weights:
+ path = pretrain_weights
+ load_params(exe, program, path)
+ logger.info("Finish initing model from {}".format(path))
def save_model(program, model_path):
diff --git a/tools/eval.py b/tools/eval.py
index edd84a9dd98133c1e700eb35561a3ab287bd3162..22185911db073cd096c9590781609f03feea4fdb 100755
--- a/tools/eval.py
+++ b/tools/eval.py
@@ -18,9 +18,9 @@ from __future__ import print_function
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
def set_paddle_flags(**kwargs):
diff --git a/tools/eval_utils/eval_det_iou.py b/tools/eval_utils/eval_det_iou.py
index 2f5ff2f10f1d5ecd1d33d2c742633945288ffe4c..64405984e3b90aa58f21324e727713075c5c4dc4 100644
--- a/tools/eval_utils/eval_det_iou.py
+++ b/tools/eval_utils/eval_det_iou.py
@@ -88,8 +88,8 @@ class DetectionIoUEvaluator(object):
points = gt[n]['points']
# transcription = gt[n]['text']
dontCare = gt[n]['ignore']
- points = Polygon(points)
- points = points.buffer(0)
+# points = Polygon(points)
+# points = points.buffer(0)
if not Polygon(points).is_valid or not Polygon(points).is_simple:
continue
@@ -105,8 +105,8 @@ class DetectionIoUEvaluator(object):
for n in range(len(pred)):
points = pred[n]['points']
- points = Polygon(points)
- points = points.buffer(0)
+# points = Polygon(points)
+# points = points.buffer(0)
if not Polygon(points).is_valid or not Polygon(points).is_simple:
continue
diff --git a/tools/eval_utils/eval_rec_utils.py b/tools/eval_utils/eval_rec_utils.py
index aebb9f903a4d84288da0d7b042ea39e1759456b5..4479d9dff2c352c54a26ac1bfdbddab497fff418 100644
--- a/tools/eval_utils/eval_rec_utils.py
+++ b/tools/eval_utils/eval_rec_utils.py
@@ -29,7 +29,7 @@ FORMAT = '%(asctime)s-%(levelname)s: %(message)s'
logging.basicConfig(level=logging.INFO, format=FORMAT)
logger = logging.getLogger(__name__)
-from ppocr.utils.character import cal_predicts_accuracy
+from ppocr.utils.character import cal_predicts_accuracy, cal_predicts_accuracy_srn
from ppocr.utils.character import convert_rec_label_to_lod
from ppocr.utils.character import convert_rec_attention_infer_res
from ppocr.utils.utility import create_module
@@ -60,22 +60,60 @@ def eval_rec_run(exe, config, eval_info_dict, mode):
for ino in range(img_num):
img_list.append(data[ino][0])
label_list.append(data[ino][1])
- img_list = np.concatenate(img_list, axis=0)
- outs = exe.run(eval_info_dict['program'], \
+
+ if config['Global']['loss_type'] != "srn":
+ img_list = np.concatenate(img_list, axis=0)
+ outs = exe.run(eval_info_dict['program'], \
feed={'image': img_list}, \
fetch_list=eval_info_dict['fetch_varname_list'], \
return_numpy=False)
- preds = np.array(outs[0])
- if preds.shape[1] != 1:
- preds, preds_lod = convert_rec_attention_infer_res(preds)
+ preds = np.array(outs[0])
+
+ if config['Global']['loss_type'] == "attention":
+ preds, preds_lod = convert_rec_attention_infer_res(preds)
+ else:
+ preds_lod = outs[0].lod()[0]
+ labels, labels_lod = convert_rec_label_to_lod(label_list)
+ acc, acc_num, sample_num = cal_predicts_accuracy(
+ char_ops, preds, preds_lod, labels, labels_lod,
+ is_remove_duplicate)
else:
- preds_lod = outs[0].lod()[0]
- labels, labels_lod = convert_rec_label_to_lod(label_list)
- acc, acc_num, sample_num = cal_predicts_accuracy(
- char_ops, preds, preds_lod, labels, labels_lod, is_remove_duplicate)
+ encoder_word_pos_list = []
+ gsrm_word_pos_list = []
+ gsrm_slf_attn_bias1_list = []
+ gsrm_slf_attn_bias2_list = []
+ for ino in range(img_num):
+ encoder_word_pos_list.append(data[ino][2])
+ gsrm_word_pos_list.append(data[ino][3])
+ gsrm_slf_attn_bias1_list.append(data[ino][4])
+ gsrm_slf_attn_bias2_list.append(data[ino][5])
+
+ img_list = np.concatenate(img_list, axis=0)
+ label_list = np.concatenate(label_list, axis=0)
+ encoder_word_pos_list = np.concatenate(
+ encoder_word_pos_list, axis=0).astype(np.int64)
+ gsrm_word_pos_list = np.concatenate(
+ gsrm_word_pos_list, axis=0).astype(np.int64)
+ gsrm_slf_attn_bias1_list = np.concatenate(
+ gsrm_slf_attn_bias1_list, axis=0).astype(np.float32)
+ gsrm_slf_attn_bias2_list = np.concatenate(
+ gsrm_slf_attn_bias2_list, axis=0).astype(np.float32)
+
+ labels = label_list
+
+ outs = exe.run(eval_info_dict['program'], \
+ feed={'image': img_list, 'encoder_word_pos': encoder_word_pos_list,
+ 'gsrm_word_pos': gsrm_word_pos_list, 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1_list,
+ 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2_list}, \
+ fetch_list=eval_info_dict['fetch_varname_list'], \
+ return_numpy=False)
+ preds = np.array(outs[0])
+ acc, acc_num, sample_num = cal_predicts_accuracy_srn(
+ char_ops, preds, labels, config['Global']['max_text_length'])
+
total_acc_num += acc_num
total_sample_num += sample_num
- logger.info("eval batch id: {}, acc: {}".format(total_batch_num, acc))
+ #logger.info("eval batch id: {}, acc: {}".format(total_batch_num, acc))
total_batch_num += 1
avg_acc = total_acc_num * 1.0 / total_sample_num
metrics = {'avg_acc': avg_acc, "total_acc_num": total_acc_num, \
diff --git a/tools/export_model.py b/tools/export_model.py
index de4ba0e4c44fec1cd2427bfe7c9065639eef26e2..0bd06b98dcacc06893becbefacbea198c432bc39 100644
--- a/tools/export_model.py
+++ b/tools/export_model.py
@@ -18,9 +18,9 @@ from __future__ import print_function
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
def set_paddle_flags(**kwargs):
diff --git a/tools/infer/predict_det.py b/tools/infer/predict_det.py
index 75644aeb990ab95edb51f2809bb8cc8fbdf3e2be..82877c0ef71b56e6afb4ea43725981640c0e8c64 100755
--- a/tools/infer/predict_det.py
+++ b/tools/infer/predict_det.py
@@ -13,19 +13,21 @@
# limitations under the License.
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '../..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '../..')))
import tools.infer.utility as utility
from ppocr.utils.utility import initial_logger
logger = initial_logger()
from ppocr.utils.utility import get_image_file_list, check_and_read_gif
import cv2
+from ppocr.data.det.sast_process import SASTProcessTest
from ppocr.data.det.east_process import EASTProcessTest
from ppocr.data.det.db_process import DBProcessTest
from ppocr.postprocess.db_postprocess import DBPostProcess
from ppocr.postprocess.east_postprocess import EASTPostPocess
+from ppocr.postprocess.sast_postprocess import SASTPostProcess
import copy
import numpy as np
import math
@@ -52,6 +54,20 @@ class TextDetector(object):
postprocess_params["cover_thresh"] = args.det_east_cover_thresh
postprocess_params["nms_thresh"] = args.det_east_nms_thresh
self.postprocess_op = EASTPostPocess(postprocess_params)
+ elif self.det_algorithm == "SAST":
+ self.preprocess_op = SASTProcessTest(preprocess_params)
+ postprocess_params["score_thresh"] = args.det_sast_score_thresh
+ postprocess_params["nms_thresh"] = args.det_sast_nms_thresh
+ self.det_sast_polygon = args.det_sast_polygon
+ if self.det_sast_polygon:
+ postprocess_params["sample_pts_num"] = 6
+ postprocess_params["expand_scale"] = 1.2
+ postprocess_params["shrink_ratio_of_width"] = 0.2
+ else:
+ postprocess_params["sample_pts_num"] = 2
+ postprocess_params["expand_scale"] = 1.0
+ postprocess_params["shrink_ratio_of_width"] = 0.3
+ self.postprocess_op = SASTPostProcess(postprocess_params)
else:
logger.info("unknown det_algorithm:{}".format(self.det_algorithm))
sys.exit(0)
@@ -84,7 +100,7 @@ class TextDetector(object):
return rect
def clip_det_res(self, points, img_height, img_width):
- for pno in range(4):
+ for pno in range(points.shape[0]):
points[pno, 0] = int(min(max(points[pno, 0], 0), img_width - 1))
points[pno, 1] = int(min(max(points[pno, 1], 0), img_height - 1))
return points
@@ -103,6 +119,15 @@ class TextDetector(object):
dt_boxes = np.array(dt_boxes_new)
return dt_boxes
+ def filter_tag_det_res_only_clip(self, dt_boxes, image_shape):
+ img_height, img_width = image_shape[0:2]
+ dt_boxes_new = []
+ for box in dt_boxes:
+ box = self.clip_det_res(box, img_height, img_width)
+ dt_boxes_new.append(box)
+ dt_boxes = np.array(dt_boxes_new)
+ return dt_boxes
+
def __call__(self, img):
ori_im = img.copy()
im, ratio_list = self.preprocess_op(img)
@@ -120,11 +145,20 @@ class TextDetector(object):
if self.det_algorithm == "EAST":
outs_dict['f_geo'] = outputs[0]
outs_dict['f_score'] = outputs[1]
+ elif self.det_algorithm == 'SAST':
+ outs_dict['f_border'] = outputs[0]
+ outs_dict['f_score'] = outputs[1]
+ outs_dict['f_tco'] = outputs[2]
+ outs_dict['f_tvo'] = outputs[3]
else:
outs_dict['maps'] = outputs[0]
+
dt_boxes_list = self.postprocess_op(outs_dict, [ratio_list])
dt_boxes = dt_boxes_list[0]
- dt_boxes = self.filter_tag_det_res(dt_boxes, ori_im.shape)
+ if self.det_algorithm == "SAST" and self.det_sast_polygon:
+ dt_boxes = self.filter_tag_det_res_only_clip(dt_boxes, ori_im.shape)
+ else:
+ dt_boxes = self.filter_tag_det_res(dt_boxes, ori_im.shape)
elapse = time.time() - starttime
return dt_boxes, elapse
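
clip_det_res now loops over points.shape[0] instead of a fixed 4 points, and filter_tag_det_res_only_clip keeps every SAST polygon and only clamps its vertices into the image. A minimal NumPy version of that clamping, with made-up coordinates:

    import numpy as np

    def clip_poly(points, img_height, img_width):
        # clamp every vertex into the image, as clip_det_res does for N-point polygons
        points = points.copy()
        points[:, 0] = np.clip(points[:, 0], 0, img_width - 1)
        points[:, 1] = np.clip(points[:, 1], 0, img_height - 1)
        return points

    poly = np.array([[-5, 10], [120, 10], [120, 90], [60, 95], [-5, 90]], dtype=np.float32)
    print(clip_poly(poly, img_height=80, img_width=100))   # x clamped to [0, 99], y to [0, 79]
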
diff --git a/tools/infer/predict_rec.py b/tools/infer/predict_rec.py
index 578401fb37b64dbf248460acb2b4c671e6a6724c..c81b4eb2560ee5ad66a85c96efe4de935a2beee1 100755
--- a/tools/infer/predict_rec.py
+++ b/tools/infer/predict_rec.py
@@ -40,7 +40,8 @@ class TextRecognizer(object):
char_ops_params = {
"character_type": args.rec_char_type,
"character_dict_path": args.rec_char_dict_path,
- "use_space_char": args.use_space_char
+ "use_space_char": args.use_space_char,
+ "max_text_length": args.max_text_length
}
if self.rec_algorithm != "RARE":
char_ops_params['loss_type'] = 'ctc'
@@ -122,9 +123,9 @@ class TextRecognizer(object):
ind = np.argmax(probs, axis=1)
blank = probs.shape[1]
valid_ind = np.where(ind != (blank - 1))[0]
- score = np.mean(probs[valid_ind, ind[valid_ind]])
if len(valid_ind) == 0:
continue
+ score = np.mean(probs[valid_ind, ind[valid_ind]])
# rec_res.append([preds_text, score])
rec_res[indices[beg_img_no + rno]] = [preds_text, score]
else:
diff --git a/tools/infer/utility.py b/tools/infer/utility.py
index bab211a8dfa396b1aa8996cf7105452316441ed1..392bc4dfa5831ab64c0ed920ea3e9bfdea04925d 100755
--- a/tools/infer/utility.py
+++ b/tools/infer/utility.py
@@ -53,12 +53,18 @@ def parse_args():
parser.add_argument("--det_east_cover_thresh", type=float, default=0.1)
parser.add_argument("--det_east_nms_thresh", type=float, default=0.2)
+    # SAST params
+ parser.add_argument("--det_sast_score_thresh", type=float, default=0.5)
+ parser.add_argument("--det_sast_nms_thresh", type=float, default=0.2)
+ parser.add_argument("--det_sast_polygon", type=bool, default=False)
+
#params for text recognizer
parser.add_argument("--rec_algorithm", type=str, default='CRNN')
parser.add_argument("--rec_model_dir", type=str)
parser.add_argument("--rec_image_shape", type=str, default="3, 32, 320")
parser.add_argument("--rec_char_type", type=str, default='ch')
parser.add_argument("--rec_batch_num", type=int, default=30)
+ parser.add_argument("--max_text_length", type=int, default=25)
parser.add_argument(
"--rec_char_dict_path",
type=str,
@@ -95,7 +101,7 @@ def create_predictor(args, mode):
config.set_cpu_math_library_num_threads(6)
if args.enable_mkldnn:
config.enable_mkldnn()
-
+
#config.enable_memory_optim()
config.disable_glog_info()
@@ -169,7 +175,7 @@ def draw_ocr_box_txt(image, boxes, txts):
img_right = Image.new('RGB', (w, h), (255, 255, 255))
import random
- # 每次使用相同的随机种子 ,可以保证两次颜色一致
+
random.seed(0)
draw_left = ImageDraw.Draw(img_left)
draw_right = ImageDraw.Draw(img_right)
diff --git a/tools/infer_det.py b/tools/infer_det.py
index a8b49b6b075ba509e17c37c8d1f05dee9822edec..1e7fdcc46a1d2f47a7928d6dc171ae393b15f901 100755
--- a/tools/infer_det.py
+++ b/tools/infer_det.py
@@ -22,9 +22,9 @@ import json
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
def set_paddle_flags(**kwargs):
@@ -134,8 +134,10 @@ def main():
dic = {'f_score': outs[0], 'f_geo': outs[1]}
elif config['Global']['algorithm'] == 'DB':
dic = {'maps': outs[0]}
+ elif config['Global']['algorithm'] == 'SAST':
+ dic = {'f_score': outs[0], 'f_border': outs[1], 'f_tvo': outs[2], 'f_tco': outs[3]}
else:
- raise Exception("only support algorithm: ['EAST', 'DB']")
+ raise Exception("only support algorithm: ['EAST', 'DB', 'SAST']")
dt_boxes_list = postprocess(dic, ratio_list)
for ino in range(img_num):
dt_boxes = dt_boxes_list[ino]
@@ -149,7 +151,7 @@ def main():
fout.write(otstr.encode())
src_img = cv2.imread(img_name)
draw_det_res(dt_boxes, config, src_img, img_name)
-
+
logger.info("success!")
diff --git a/tools/infer_rec.py b/tools/infer_rec.py
index 8cde44d8ef91355c48f9081cf706e340ddb04820..7a81b3d4cedc26616fa1194baa9e4431c2176150 100755
--- a/tools/infer_rec.py
+++ b/tools/infer_rec.py
@@ -19,9 +19,9 @@ from __future__ import print_function
import numpy as np
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
def set_paddle_flags(**kwargs):
@@ -64,7 +64,6 @@ def main():
exe = fluid.Executor(place)
rec_model = create_module(config['Architecture']['function'])(params=config)
-
startup_prog = fluid.Program()
eval_prog = fluid.Program()
with fluid.program_guard(eval_prog, startup_prog):
@@ -86,10 +85,36 @@ def main():
for i in range(max_img_num):
logger.info("infer_img:%s" % infer_list[i])
img = next(blobs)
- predict = exe.run(program=eval_prog,
- feed={"image": img},
- fetch_list=fetch_varname_list,
- return_numpy=False)
+ if loss_type != "srn":
+ predict = exe.run(program=eval_prog,
+ feed={"image": img},
+ fetch_list=fetch_varname_list,
+ return_numpy=False)
+ else:
+ encoder_word_pos_list = []
+ gsrm_word_pos_list = []
+ gsrm_slf_attn_bias1_list = []
+ gsrm_slf_attn_bias2_list = []
+ encoder_word_pos_list.append(img[1])
+ gsrm_word_pos_list.append(img[2])
+ gsrm_slf_attn_bias1_list.append(img[3])
+ gsrm_slf_attn_bias2_list.append(img[4])
+
+ encoder_word_pos_list = np.concatenate(
+ encoder_word_pos_list, axis=0).astype(np.int64)
+ gsrm_word_pos_list = np.concatenate(
+ gsrm_word_pos_list, axis=0).astype(np.int64)
+ gsrm_slf_attn_bias1_list = np.concatenate(
+ gsrm_slf_attn_bias1_list, axis=0).astype(np.float32)
+ gsrm_slf_attn_bias2_list = np.concatenate(
+ gsrm_slf_attn_bias2_list, axis=0).astype(np.float32)
+
+ predict = exe.run(program=eval_prog, \
+ feed={'image': img[0], 'encoder_word_pos': encoder_word_pos_list,
+ 'gsrm_word_pos': gsrm_word_pos_list, 'gsrm_slf_attn_bias1': gsrm_slf_attn_bias1_list,
+ 'gsrm_slf_attn_bias2': gsrm_slf_attn_bias2_list}, \
+ fetch_list=fetch_varname_list, \
+ return_numpy=False)
if loss_type == "ctc":
preds = np.array(predict[0])
preds = preds.reshape(-1)
@@ -114,7 +139,18 @@ def main():
score = np.mean(probs[0, 1:end_pos[1]])
preds = preds.reshape(-1)
preds_text = char_ops.decode(preds)
-
+ elif loss_type == "srn":
+ cur_pred = []
+ preds = np.array(predict[0])
+ preds = preds.reshape(-1)
+ probs = np.array(predict[1])
+ ind = np.argmax(probs, axis=1)
+ valid_ind = np.where(preds != 37)[0]
+ if len(valid_ind) == 0:
+ continue
+ score = np.mean(probs[valid_ind, ind[valid_ind]])
+ preds = preds[:valid_ind[-1] + 1]
+ preds_text = char_ops.decode(preds)
logger.info("\t index: {}".format(preds))
logger.info("\t word : {}".format(preds_text))
logger.info("\t score: {}".format(score))
diff --git a/tools/program.py b/tools/program.py
index 4ebc11670702ba627c89b060692c9827e6e163fd..6d8b9937bff7e70f018b069467525669e7001aae 100755
--- a/tools/program.py
+++ b/tools/program.py
@@ -32,7 +32,8 @@ from eval_utils.eval_det_utils import eval_det_run
from eval_utils.eval_rec_utils import eval_rec_run
from ppocr.utils.save_load import save_model
import numpy as np
-from ppocr.utils.character import cal_predicts_accuracy, CharacterOps
+from ppocr.utils.character import cal_predicts_accuracy, cal_predicts_accuracy_srn, CharacterOps
+
class ArgsParser(ArgumentParser):
def __init__(self):
@@ -81,10 +82,8 @@ default_config = {'Global': {'debug': False, }}
def load_config(file_path):
"""
Load config from yml/yaml file.
-
Args:
file_path (str): Path of the config file to be loaded.
-
Returns: global config
"""
merge_config(default_config)
@@ -103,10 +102,8 @@ def load_config(file_path):
def merge_config(config):
"""
Merge config into global config.
-
Args:
config (dict): Config to be merged.
-
Returns: global config
"""
for key, value in config.items():
@@ -157,13 +154,11 @@ def build(config, main_prog, startup_prog, mode):
3. create a model
4. create fetchs
5. create an optimizer
-
Args:
config(dict): config
main_prog(): main program
startup_prog(): startup program
is_train(bool): train or valid
-
Returns:
dataloader(): a bridge between the model and the data
fetchs(dict): dict of model outputs(included loss and measures)
@@ -176,8 +171,16 @@ def build(config, main_prog, startup_prog, mode):
fetch_name_list = list(outputs.keys())
fetch_varname_list = [outputs[v].name for v in fetch_name_list]
opt_loss_name = None
+ model_average = None
+ img_loss_name = None
+ word_loss_name = None
if mode == "train":
opt_loss = outputs['total_loss']
+ # srn loss
+ #img_loss = outputs['img_loss']
+ #word_loss = outputs['word_loss']
+ #img_loss_name = img_loss.name
+ #word_loss_name = word_loss.name
opt_params = config['Optimizer']
optimizer = create_module(opt_params['function'])(opt_params)
optimizer.minimize(opt_loss)
@@ -185,7 +188,17 @@ def build(config, main_prog, startup_prog, mode):
global_lr = optimizer._global_learning_rate()
fetch_name_list.insert(0, "lr")
fetch_varname_list.insert(0, global_lr.name)
- return (dataloader, fetch_name_list, fetch_varname_list, opt_loss_name)
+ if "loss_type" in config["Global"]:
+ if config['Global']["loss_type"] == 'srn':
+ model_average = fluid.optimizer.ModelAverage(
+ config['Global']['average_window'],
+ min_average_window=config['Global'][
+ 'min_average_window'],
+ max_average_window=config['Global'][
+ 'max_average_window'])
+
+ return (dataloader, fetch_name_list, fetch_varname_list, opt_loss_name,
+ model_average)
def build_export(config, main_prog, startup_prog):
@@ -329,14 +342,20 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
lr = np.mean(np.array(train_outs[fetch_map['lr']]))
preds_idx = fetch_map['decoded_out']
preds = np.array(train_outs[preds_idx])
- preds_lod = train_outs[preds_idx].lod()[0]
labels_idx = fetch_map['label']
labels = np.array(train_outs[labels_idx])
- labels_lod = train_outs[labels_idx].lod()[0]
- acc, acc_num, img_num = cal_predicts_accuracy(
- config['Global']['char_ops'], preds, preds_lod, labels,
- labels_lod)
+ if config['Global']['loss_type'] != 'srn':
+ preds_lod = train_outs[preds_idx].lod()[0]
+ labels_lod = train_outs[labels_idx].lod()[0]
+
+ acc, acc_num, img_num = cal_predicts_accuracy(
+ config['Global']['char_ops'], preds, preds_lod, labels,
+ labels_lod)
+ else:
+ acc, acc_num, img_num = cal_predicts_accuracy_srn(
+ config['Global']['char_ops'], preds, labels,
+ config['Global']['max_text_length'])
t2 = time.time()
train_batch_elapse = t2 - t1
stats = {'loss': loss, 'acc': acc}
@@ -350,6 +369,9 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
if train_batch_id > 0 and\
train_batch_id % eval_batch_step == 0:
+ model_average = train_info_dict['model_average']
+                if model_average is not None:
+ model_average.apply(exe)
metrics = eval_rec_run(exe, config, eval_info_dict, "eval")
eval_acc = metrics['avg_acc']
eval_sample_num = metrics['total_sample_num']
@@ -375,6 +397,7 @@ def train_eval_rec_run(config, exe, train_info_dict, eval_info_dict):
save_model(train_info_dict['train_program'], save_path)
return
+
def preprocess():
FLAGS = ArgsParser().parse_args()
config = load_config(FLAGS.config)
@@ -386,15 +409,15 @@ def preprocess():
check_gpu(use_gpu)
alg = config['Global']['algorithm']
- assert alg in ['EAST', 'DB', 'Rosetta', 'CRNN', 'STARNet', 'RARE']
- if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE']:
+ assert alg in ['EAST', 'DB', 'SAST', 'Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN']
+ if alg in ['Rosetta', 'CRNN', 'STARNet', 'RARE', 'SRN']:
config['Global']['char_ops'] = CharacterOps(config['Global'])
place = fluid.CUDAPlace(0) if use_gpu else fluid.CPUPlace()
startup_program = fluid.Program()
train_program = fluid.Program()
- if alg in ['EAST', 'DB']:
+ if alg in ['EAST', 'DB', 'SAST']:
train_alg_type = 'det'
else:
train_alg_type = 'rec'
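
For SRN, build() now creates a fluid.optimizer.ModelAverage and train_eval_rec_run applies it before each evaluation, so the model is evaluated with an average of recent parameter values rather than the latest step. The sketch below only illustrates that idea with a running mean in NumPy; it is not the Paddle API, and the fixed-size window handling of ModelAverage is simplified away.

    import numpy as np

    class RunningParamAverage:
        """Keep a running average of a parameter vector and read it out for eval."""
        def __init__(self, params):
            self.avg = params.astype(np.float64).copy()
            self.count = 1

        def update(self, params):
            self.count += 1
            self.avg += (params - self.avg) / self.count    # incremental mean

        def averaged(self):
            return self.avg.copy()

    weights = np.zeros(3)
    averager = RunningParamAverage(weights)
    for step in range(1, 6):
        weights = weights + 0.1           # pretend these are optimizer updates
        averager.update(weights)
    print(weights)                        # latest parameters
    print(averager.averaged())            # smoother values used for evaluation
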
diff --git a/tools/train.py b/tools/train.py
index 68e792b7331a9d47ca6744ea1a9f362979d75542..300705e0288c86ec1e9f2a00ef7311cd7c8e9897 100755
--- a/tools/train.py
+++ b/tools/train.py
@@ -18,9 +18,9 @@ from __future__ import print_function
import os
import sys
-__dir__ = os.path.dirname(__file__)
+__dir__ = os.path.dirname(os.path.abspath(__file__))
sys.path.append(__dir__)
-sys.path.append(os.path.join(__dir__, '..'))
+sys.path.append(os.path.abspath(os.path.join(__dir__, '..')))
def set_paddle_flags(**kwargs):
@@ -52,6 +52,7 @@ def main():
train_fetch_name_list = train_build_outputs[1]
train_fetch_varname_list = train_build_outputs[2]
train_opt_loss_name = train_build_outputs[3]
+ model_average = train_build_outputs[-1]
eval_program = fluid.Program()
eval_build_outputs = program.build(
@@ -85,7 +86,8 @@ def main():
'train_program':train_program,\
'reader':train_loader,\
'fetch_name_list':train_fetch_name_list,\
- 'fetch_varname_list':train_fetch_varname_list}
+ 'fetch_varname_list':train_fetch_varname_list,\
+ 'model_average': model_average}
eval_info_dict = {'program':eval_program,\
'reader':eval_reader,\