diff --git a/doc/doc_ch/dataset/kie_datasets.md b/doc/doc_ch/dataset/kie_datasets.md
index 7f8d14cbc4ad724621f28c7d6ca1f8c2ac79f097..be5624dbf257150745a79db25f0367ccee339559 100644
--- a/doc/doc_ch/dataset/kie_datasets.md
+++ b/doc/doc_ch/dataset/kie_datasets.md
@@ -1,6 +1,6 @@
# 关键信息抽取数据集
-这里整理了常见的DocVQA数据集,持续更新中,欢迎各位小伙伴贡献数据集~
+这里整理了常见的关键信息抽取数据集,持续更新中,欢迎各位小伙伴贡献数据集~
- [FUNSD数据集](#funsd)
- [XFUND数据集](#xfund)
diff --git a/doc/doc_en/dataset/kie_datasets_en.md b/doc/doc_en/dataset/kie_datasets_en.md
index 3a8b744fc0b2653aab5c1435996a2ef73dd336e4..7b476f77d0380496d026c448937e59b23ee24c87 100644
--- a/doc/doc_en/dataset/kie_datasets_en.md
+++ b/doc/doc_en/dataset/kie_datasets_en.md
@@ -1,9 +1,10 @@
-## Key Imnformation Extraction dataset
+## Key Information Extraction dataset
+
-Here are the common DocVQA datasets, which are being updated continuously. Welcome to contribute datasets.
+Here are the common datasets for key information extraction, which are being updated continuously. Welcome to contribute datasets.
- [FUNSD dataset](#funsd)
- [XFUND dataset](#xfund)
-- [wildreceipt dataset](#wildreceipt数据集)
+- [wildreceipt dataset](#wildreceipt-dataset)
#### 1. FUNSD dataset
@@ -20,7 +21,8 @@ Here are the common DocVQA datasets, which are being updated continuously. Welco
#### 2. XFUND dataset
- **Data source**: https://github.com/doc-analysis/XFUND
-- **Data introduction**: XFUND is a multilingual form comprehension dataset, which contains form data in 7 different languages, and all are manually annotated in the form of key-value pairs. The data for each language contains 199 form data, which are divided into 149 training sets and 50 test sets. Part of the image and the annotation box visualization are shown below:
+- **Data introduction**: XFUND is a multilingual form comprehension dataset, which contains form data in 7 different languages, all manually annotated in the form of key-value pairs. The data for each language contains 199 forms, divided into 149 training samples and 50 test samples. Some of the images and their annotation boxes are visualized below.
+
diff --git a/doc/doc_en/dataset/layout_datasets_en.md b/doc/doc_en/dataset/layout_datasets_en.md
new file mode 100644
index 0000000000000000000000000000000000000000..54c88609d0f25f65b4878fac96a43de5f1cc3164
--- /dev/null
+++ b/doc/doc_en/dataset/layout_datasets_en.md
@@ -0,0 +1,55 @@
+## Layout Analysis Dataset
+
+Here are the common datasets for layout analysis, which are being updated continuously. Welcome to contribute datasets.
+
+- [PubLayNet dataset](#publaynet)
+- [CDLA dataset](#CDLA)
+- [TableBank dataset](#TableBank)
+
+
+Most of the layout analysis datasets are object detection datasets. In addition to open source datasets, you can also label or synthesize datasets using annotation tools such as [labelme](https://github.com/wkentaro/labelme).
+
+
+
+
+#### 1. PubLayNet dataset
+
+- **Data source**: https://github.com/ibm-aur-nlp/PubLayNet
+- **Data introduction**: The PubLayNet dataset contains 350000 training images and 11000 validation images. There are 5 categories in total, namely: `text, title, list, table, figure`. Some images and their annotations are shown below.
+
+
+
+
+
+
+- **Download address**: https://developer.ibm.com/exchanges/data/all/publaynet/
+- **Note**: When using this dataset, you need to follow [CDLA-Permissive](https://cdla.io/permissive-1-0/) license.
+
+
+
+
+#### 2. CDLA dataset
+- **Data source**: https://github.com/buptlihang/CDLA
+- **Data introduction**: The CDLA dataset contains 5000 training images and 1000 validation images with 10 categories, which are `Text, Title, Figure, Figure caption, Table, Table caption, Header, Footer, Reference, Equation`. Some images and their annotations are shown below.
+
+
+
+
+
+
+- **Download address**: https://github.com/buptlihang/CDLA
+- **Note**: When you train a detection model on the CDLA dataset using [PaddleDetection](https://github.com/PaddlePaddle/PaddleDetection/tree/develop), you need to remove the labels `__ignore__` and `_background_`.
+
+
+
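+The note above can be sketched in code. The snippet below is a minimal, illustrative helper (not part of CDLA or PaddleDetection) that drops the `__ignore__` and `_background_` placeholder classes from a COCO-format annotation dict before training; the function name and sample data are assumptions for the example.
+
+```python
+def filter_placeholder_labels(coco):
+    """Remove __ignore__/_background_ categories and their annotations
+    from a COCO-style annotation dict (illustrative helper)."""
+    bad_ids = {c["id"] for c in coco["categories"]
+               if c["name"] in ("__ignore__", "_background_")}
+    coco["categories"] = [c for c in coco["categories"]
+                          if c["id"] not in bad_ids]
+    coco["annotations"] = [a for a in coco["annotations"]
+                           if a["category_id"] not in bad_ids]
+    return coco
+
+
+if __name__ == "__main__":
+    # Tiny hand-made sample; real annotations come from the CDLA JSON files.
+    sample = {
+        "categories": [{"id": 0, "name": "__ignore__"},
+                       {"id": 1, "name": "Text"},
+                       {"id": 2, "name": "_background_"}],
+        "annotations": [{"id": 10, "category_id": 1},
+                        {"id": 11, "category_id": 0}],
+    }
+    cleaned = filter_placeholder_labels(sample)
+    print([c["name"] for c in cleaned["categories"]])  # ['Text']
+```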
+#### 3. TableBank dataset
+- **Data source**: https://doc-analysis.github.io/tablebank-page/index.html
+- **Data introduction**: The TableBank dataset contains 2 types of documents: Latex (187199 training images, 7265 validation images and 5719 testing images) and Word (73383 training images, 2735 validation images and 2281 testing images). Some images and their annotations are shown below.
+
+
+
+
+
+
+- **Download address**: https://doc-analysis.github.io/tablebank-page/index.html
+- **Note**: When using this dataset, you need to follow [Apache-2.0](https://github.com/doc-analysis/TableBank/blob/master/LICENSE) license.
diff --git a/ppstructure/kie/requirements.txt b/ppstructure/kie/requirements.txt
index 53a7315d051704640b9a692ffaa52ce05fd16274..11fa98da1bff7a1863d8a077ca73435d15072523 100644
--- a/ppstructure/kie/requirements.txt
+++ b/ppstructure/kie/requirements.txt
@@ -1,7 +1,7 @@
sentencepiece
yacs
seqeval
-git+https://github.com/PaddlePaddle/PaddleNLP
pypandoc
attrdict
python_docx
+https://paddleocr.bj.bcebos.com/ppstructure/whl/paddlenlp-2.3.0.dev0-py3-none-any.whl