Unverified commit 7fe194c2, authored by jm_12138, committed by GitHub

Modules README CN to EN (#2027)

* CN to EN

* CN to EN

* fix typo

* EN
Parent afba7194
# Zero-base Linux installation and image style transfer
## Step 1: Install Anaconda
- Note: To use PaddlePaddle you need to install a Python environment first; here we choose the Anaconda toolkit, an integrated Python environment
- Anaconda is a widely used Python package and environment manager
- After installing Anaconda, you can set up a Python environment together with the toolkits it requires, such as numpy
- **Download Anaconda**
- Download address: https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/?C=M&O=D
- <img src="../../imgs/Install_Related/linux/anaconda_download.png" akt="anaconda download" width="800" align="center"/>
- Select the version appropriate for your operating system
- You can enter `uname -m` in the terminal to query the architecture used by the system
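- For example (the output `x86_64` indicates a 64-bit Intel/AMD system):
- ```shell
# Print the machine architecture
uname -m
```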
- Download method 1: download on your local machine, then transfer the installer to the Linux server
- Download method 2: download directly from the Linux command line
- ```shell
# Install wget first
sudo apt-get install wget # Ubuntu
sudo yum install wget # CentOS
```
- ```shell
# Then use wget to download from Tsinghua Source
# To download Anaconda3-2021.05-Linux-x86_64.sh, the download command is as follows:
wget https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/Anaconda3-2021.05-Linux-x86_64.sh
# To download another version, replace the file name at the end of the URL with the version you want
```
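- Optionally, verify the download before installing; Anaconda publishes SHA-256 hashes for its installers that you can compare against:
- ```shell
# Compute the SHA-256 checksum of the installer and compare it with the officially published value
sha256sum Anaconda3-2021.05-Linux-x86_64.sh
```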
- To install Anaconda:
- At the command line, enter `sh Anaconda3-2021.05-Linux-x86_64.sh`
- If you download another version, replace the file name of the command with the file name you downloaded
- Just follow the installation prompts
- When viewing the license, you can enter q to exit
- **Add conda to the environment variable**
- Adding conda to the environment variables lets the system recognize the `conda` command. If you already added conda to PATH during installation, you can skip this step
- Open `~/.bashrc` in the terminal:
- ```shell
# Enter the following command in the terminal:
vim ~/.bashrc
```
- Add conda as an environment variable in `~/.bashrc`:
- ```shell
# Press i first to enter editing mode
# On the first line, enter:
export PATH="~/anaconda3/bin:$PATH"
# If the installation location is customized during installation, change ~/anaconda3/bin to the bin folder under the customized installation directory
```
- ```shell
# The modified ~/.bashrc file should look like the following (where xxx is the user name):
export PATH="~/anaconda3/bin:$PATH"
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/home/xxx/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/home/xxx/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/home/xxx/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/home/xxx/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
```
- After modification, press the `Esc` key to exit editing mode, then type `:wq!` and press Enter to save and quit
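- If you prefer not to edit the file in vim, the same line can be appended from the shell; `$HOME` is used here because `~` does not expand inside double quotes:
- ```shell
# Append the export line to ~/.bashrc without opening an editor
echo 'export PATH="$HOME/anaconda3/bin:$PATH"' >> ~/.bashrc
```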
- Verify that the conda command is recognized:
- Enter `source ~/.bashrc` in the terminal to update the environment variables
- Then enter `conda info --envs` in the terminal; if the current base environment is displayed, conda has been added to the environment variables
## Step 2: Create a conda environment
- Create a new conda environment
- ```shell
# On the command line, enter the following command to create an environment named paddle_env
# Use the Tsinghua mirror to speed up the download
conda create --name paddle_env python=3.8 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
```
- This command creates a conda environment named paddle_env with Python 3.8; it may take a while, depending on network conditions
- The command line will then print a confirmation prompt; enter y and press Enter to continue the installation
- <img src="../../imgs/Install_Related/linux/conda_create.png" alt="conda_create" width="500" align="center"/>
- Activate the newly created conda environment, and enter the following command on the command line:
- ```shell
# Activate paddle_env environment
conda activate paddle_env
```
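- The prompt should now start with `(paddle_env)`; when you want to leave the environment later, run:
- ```shell
# Return to the base environment when you are done
conda deactivate
```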
- At this point, the Anaconda environment and the Python environment are ready
## Step 3: Install the libraries required by the program
- Use pip to install PaddlePaddle in the newly activated environment:
- ```shell
# On the command line, enter the following command:
# The CPU version is installed by default; the Baidu mirror is recommended when installing PaddlePaddle
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
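- To confirm the installation, you can run PaddlePaddle's built-in self-check (`run_check` ships with recent PaddlePaddle releases):
- ```shell
# Should end with "PaddlePaddle is installed successfully!"
python -c "import paddle; paddle.utils.run_check()"
```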
- After installing PaddlePaddle, continue to install PaddleHub in the paddle_env environment:
- ```shell
# On the command line, enter the following command:
pip install paddlehub -i https://mirror.baidu.com/pypi/simple
```
- PaddleHub documentation: https://github.com/PaddlePaddle/PaddleHub/blob/develop/README.md
- When installing PaddleHub, its dependencies are installed automatically, which may take a while
## Step 4: Download the style transfer model
- With PaddleHub installed, download the style transfer model:
- ```shell
# Enter the following command on the command line
hub install stylepro_artistic==1.0.1
```
- Model documentation: [https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value=%7B%22scenes%22%3A%5B%22GANs%22%5D%7D](https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value={"scenes"%3A["GANs"]})
- <img src="../../imgs/Install_Related/linux/hub_model_intro.png" alt="hub model intro" width="800" align="center"/>
## Step 5: Prepare the style transfer data and code
### Prepare the style transfer data
- Create a working directory `style_transfer` under the user directory `~`
- ```shell
# Enter the following command in the terminal:
cd ~ # Enter the user directory
mkdir style_transfer # Create style_transfer folder
cd style_transfer # Enter style_transfer directory
```
- Put the content image and the style image in place:
- Save the image to be converted as `~/style_transfer/pic.jpg`
- <img src="../../imgs/Install_Related/linux/pic.jpg" alt="pic.jpg" width="400" align="center"/>
- Save the style image as `~/style_transfer/fangao.jpg`
- <img src="../../imgs/Install_Related/linux/fangao.jpg" alt="fangao.jpg" width="350" align="center"/>
### Code
- Create code file:
- ```shell
# The following commands are executed on the command line
pwd # Check that the current directory is style_transfer; if not, enter: cd ~/style_transfer
touch style_transfer.py # Create an empty file
vim style_transfer.py # Open the code file with the vim editor
# Press i first to enter editing mode
# Paste the code below into the vim editor
# Press the Esc key to exit editing mode, then type ":wq" and press Enter to save and exit
```
- ```python
# Code
import paddlehub as hub
import cv2
# Relative path of the image to be converted
picture = './pic.jpg'
# Relative path of the style image
style_image = './fangao.jpg'
# Create the style transfer network and load its parameters
stylepro_artistic = hub.Module(name="stylepro_artistic")
# Read the images and start the style transfer
result = stylepro_artistic.style_transfer(
images=[{'content': cv2.imread(picture),
'styles': [cv2.imread(style_image)]}],
visualization=True
)
```
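- The module also accepts a few optional arguments; for example, an `alpha` style weight and an `output_dir` path appear in the module's documentation (treat the exact parameter names as assumptions and check the module page if unsure):
- ```python
# Hedged variant: alpha and output_dir are taken from the stylepro_artistic docs
result = stylepro_artistic.style_transfer(
    images=[{'content': cv2.imread(picture),
             'styles': [cv2.imread(style_image)]}],
    alpha=0.8,  # style strength in [0, 1]; smaller keeps more of the original content
    visualization=True,
    output_dir='transfer_result'  # where the stylized image is written
)
```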
- Run the code:
- On the command line, enter `python style_transfer.py`
- When the program finishes, a new folder `transfer_result` is created and the converted image is saved there
- The output image looks like this:
- <img src="../../imgs/Install_Related/linux/output_img.png" alt="output image" width="600" align="center"/>
## Step 6: Explore more PaddlePaddle pre-trained models
- Congratulations! You have completed the PaddleHub installation and introductory example in the Linux environment. Jump right into a deeper exploration of pre-trained models. [Explore more models on the official PaddlePaddle website](https://www.paddlepaddle.org.cn/hublist)
# Zero-base macOS installation and image style transfer
## Step 1: Install Anaconda
- Note: To use PaddlePaddle you need to install a Python environment first; here we choose the Anaconda toolkit, an integrated Python environment
- Anaconda is a widely used Python package and environment manager
- After installing Anaconda, you can set up a Python environment together with the toolkits it requires, such as numpy
- Anaconda Download:
- Link: https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/?C=M&O=D
- <img src="../../imgs/Install_Related/mac/anaconda_start.png" alt="anaconda download" width="800" align="center"/>
- Select `Anaconda3-2021.05-MacOSX-x86_64.pkg` at the bottom of the list to download
- After downloading, double-click the .pkg file to enter the graphical installer
- Install with the default settings; it will take a while
- It is recommended to install a code editor such as VS Code or PyCharm
## Step 2: Open the terminal and create a conda environment
- Open terminal
- Press the Command key and the space bar at the same time, type "terminal" in Spotlight search, and double-click the result to open the terminal
- **Add conda to the environment variable**
- Adding conda to the environment variables lets the system recognize the `conda` command
- Enter the following command to open `~/.bash_profile`
- ```shell
vim ~/.bash_profile
```
- Add conda as an environment variable in `~/.bash_profile`:
- ```shell
# Press i first to enter editing mode
# On the first line, enter:
export PATH="~/opt/anaconda3/bin:$PATH"
# If the installation location is customized during installation, change ~/opt/anaconda3/bin to the bin folder under the customized installation directory
```
- ```shell
# The modified ~/.bash_profile file should look like the following (where xxx is the user name):
export PATH="~/opt/anaconda3/bin:$PATH"
# >>> conda initialize >>>
# !! Contents within this block are managed by 'conda init' !!
__conda_setup="$('/Users/xxx/opt/anaconda3/bin/conda' 'shell.bash' 'hook' 2> /dev/null)"
if [ $? -eq 0 ]; then
    eval "$__conda_setup"
else
    if [ -f "/Users/xxx/opt/anaconda3/etc/profile.d/conda.sh" ]; then
        . "/Users/xxx/opt/anaconda3/etc/profile.d/conda.sh"
    else
        export PATH="/Users/xxx/opt/anaconda3/bin:$PATH"
    fi
fi
unset __conda_setup
# <<< conda initialize <<<
```
- After modification, press the `Esc` key to exit editing mode, then type `:wq!` and press Enter to save and quit
- Verify that the conda command is recognized:
- Enter `source ~/.bash_profile` to update the environment variables
- Then enter `conda info --envs` in the terminal; if the current base environment is displayed, conda has been added to the environment variables
- Create a new conda environment
- ```shell
# On the command line, enter the following command to create an environment named paddle_env
# Use the Tsinghua mirror to speed up the download
conda create --name paddle_env python=3.8 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
```
- This command creates a conda environment named paddle_env with Python 3.8; it may take a while, depending on network conditions
- The command line will then print a confirmation prompt; enter y and press Enter to continue the installation
- <img src="../../imgs/Install_Related/mac/conda_create.png" alt="conda_create" width="600" align="center"/>
- Activate the newly created conda environment, and enter the following command on the command line:
- ```shell
# Activate paddle_env environment
conda activate paddle_env
# View the current python location
where python
```
- <img src="../../imgs/Install_Related/mac/conda_activate.png" alt="conda_actviate" width="600" align="center"/>
- At this point, the Anaconda environment and the Python environment are ready
## Step 3: Install the libraries required by the program
- Use pip to install PaddlePaddle in the newly activated environment:
- ```shell
# Enter the following command on the command line
# Confirm that the pip currently in use is the one in the paddle_env environment
where pip
# The CPU version is installed by default; the Baidu mirror is recommended when installing PaddlePaddle
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
- After installing PaddlePaddle, continue to install PaddleHub in the paddle_env environment:
- ```shell
# Enter the following command on the command line
pip install paddlehub -i https://mirror.baidu.com/pypi/simple
```
- PaddleHub documentation: https://github.com/PaddlePaddle/PaddleHub/blob/develop/README.md
- When installing PaddleHub, its dependencies are installed automatically, which may take a while
## Step 4: Download the style transfer model
- With PaddleHub installed, download the style transfer model:
- ```shell
# Enter the following command on the command line
hub install stylepro_artistic==1.0.1
```
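- You can list the locally installed modules to confirm the download (the `list` subcommand is part of PaddleHub's CLI):
- ```shell
# stylepro_artistic should appear in the output
hub list
```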
- Model documentation: [https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value=%7B%22scenes%22%3A%5B%22GANs%22%5D%7D](https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value={"scenes"%3A["GANs"]})
- <img src="../../imgs/Install_Related/mac/hub_model_intro.png" alt="hub model intro" width="800" align="center"/>
## Step 5: Prepare the style transfer data and code
### Prepare the style transfer data
- Create a working directory `style_transfer` on the Desktop
- ```shell
# Enter the following command in the terminal:
cd ~/Desktop # Enter the desktop
mkdir style_transfer # Create style_transfer folder
cd style_transfer # Enter style_transfer directory
```
- Put the content image and the style image in place:
- Save the image to be converted as `~/Desktop/style_transfer/pic.jpg`
- <img src="../../imgs/Install_Related/mac/pic.jpg" alt="pic.jpg" width="400" align="center"/>
- Save the style image as `~/Desktop/style_transfer/fangao.jpg`
- <img src="../../imgs/Install_Related/mac/fangao.jpg" alt="fangao.jpg" width="350" align="center"/>
### Code
- Create the code file `style_transfer.py` in `style_transfer`
- Copy the following code into `style_transfer.py`
- ```python
import paddlehub as hub
import cv2
# Relative path of the image to be converted
picture = './pic.jpg'
# Relative path of the style image
style_image = './fangao.jpg'
# Create the style transfer network and load its parameters
stylepro_artistic = hub.Module(name="stylepro_artistic")
# Read the images and start the style transfer
result = stylepro_artistic.style_transfer(
images=[{'content': cv2.imread(picture),
'styles': [cv2.imread(style_image)]}],
visualization=True
)
```
- If you do not have a code editor such as VS Code, you can work from the command line instead:
- ```shell
pwd # Check that the current directory is style_transfer; if not, enter: cd ~/Desktop/style_transfer
touch style_transfer.py # Create an empty file
vim style_transfer.py # Open the code file with the vim editor
# Press i first to enter editing mode
# Paste the above code into the vim editor
# Press the Esc key to exit editing mode, then type ":wq" and press Enter to save and exit
```
- Run the code:
- On the command line, enter `python style_transfer.py`
- When the program finishes, a new folder `transfer_result` is created and the converted image is saved there
- The output image looks like this:
- <img src="../../imgs/Install_Related/mac/output_img.png" alt="output image" width="600" align="center"/>
## Step 6: Explore more PaddlePaddle pre-trained models
- Congratulations! You have completed the PaddleHub installation and introductory example in the Mac environment. Jump right into a deeper exploration of pre-trained models. [Explore more models on the official PaddlePaddle website](https://www.paddlepaddle.org.cn/hublist)
# Zero-base Windows installation and image style transfer
## Step 1: Install Anaconda
- Note: To use PaddlePaddle you need to install a Python environment first; here we choose the Anaconda toolkit, an integrated Python environment
- Anaconda is a widely used Python package and environment manager
- After installing Anaconda, you can set up a Python environment together with the toolkits it requires, such as numpy
- Anaconda Download:
- Link: https://mirrors.tuna.tsinghua.edu.cn/anaconda/archive/?C=M&O=D
- Most Windows 10 computers run a 64-bit operating system, so choose the x86_64 installer; if the computer runs a 32-bit operating system, choose the x86 installer
- <img src="../../imgs/Install_Related/windows/Anaconda_download.png" alt="anaconda download" width="800" align="center"/>
- After downloading, double-click the installer to enter the graphical interface
- The default installation location is drive C; it is recommended to change it to drive D:
- <img src="../../imgs/Install_Related/windows/anaconda_install_folder.png" alt="install config" width="500" align="center"/>
- Check the option to add conda to the environment variables (PATH), and ignore the warning:
- <img src="../../imgs/Install_Related/windows/anaconda_install_env.png" alt="add conda to path" width="500" align="center"/>
## Step 2: Open the terminal and create a conda environment
- Open the Anaconda Prompt terminal
- Windows Start Menu -> Anaconda3 -> Anaconda Prompt
- <img src="../../imgs/Install_Related/windows/anaconda_prompt.png" alt="anaconda download" width="300" align="center"/>
- Create a new conda environment
- ```shell
# On the command line, enter the following command to create an environment named paddle_env
# Use the Tsinghua mirror to speed up the download
conda create --name paddle_env python=3.8 --channel https://mirrors.tuna.tsinghua.edu.cn/anaconda/pkgs/free/
```
- This command creates a conda environment named paddle_env with Python 3.8; it may take a while, depending on network conditions
- The command line will then print a confirmation prompt; enter y and press Enter to continue the installation
- <img src="../../imgs/Install_Related/windows/conda_new_env.png" alt="conda create" width="700" align="center"/>
- Activate the newly created conda environment, and enter the following command on the command line:
- ```shell
# Activate paddle_env environment
conda activate paddle_env
# View the current python location
where python
```
- <img src="../../imgs/Install_Related/windows/conda_list_env.png" alt="create environment" width="600" align="center"/>
- At this point, the Anaconda environment and the Python environment are ready
## Step 3: Install the libraries required by the program
- Use pip to install PaddlePaddle in the newly activated environment:
- ```shell
# Enter the following command on the command line
# Confirm that the pip currently in use is the one in the paddle_env environment
where pip
# The CPU version is installed by default; the Baidu mirror is recommended when installing PaddlePaddle
pip install paddlepaddle -i https://mirror.baidu.com/pypi/simple
```
- If you need the GPU version, open the [paddle official website](https://www.paddlepaddle.org.cn/) and select the appropriate version.
- Paddle official website: https://www.paddlepaddle.org.cn/
- Since CUDA and cuDNN need to be configured before installing the GPU version, it is recommended to install it only after you have gained some experience. You can check which build you have as shown below.
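- `is_compiled_with_cuda` is part of Paddle's public API and reports which build is installed:
- ```shell
# Prints False for the CPU build and True for the GPU build
python -c "import paddle; print(paddle.is_compiled_with_cuda())"
```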
- After installing PaddlePaddle, continue to install PaddleHub in the paddle_env environment:
- ```shell
# Enter the following command on the command line
pip install paddlehub -i https://mirror.baidu.com/pypi/simple
```
- PaddleHub documentation: https://github.com/PaddlePaddle/PaddleHub/blob/develop/README.md
## Step 4: Download the style transfer model
- With PaddleHub installed, download the style transfer model:
- ```shell
# Enter the following command on the command line
hub install stylepro_artistic==1.0.1
```
- Model documentation: [https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value=%7B%22scenes%22%3A%5B%22GANs%22%5D%7D](https://www.paddlepaddle.org.cn/hubsearch?filter=en_category&value={"scenes"%3A["GANs"]})
- <img src="../../imgs/Install_Related/windows/paddlehub_modulelist.png" alt="model introduction" width="700" align="center"/>
## Step 5: Prepare the style transfer data and code
### Prepare the style transfer data
- Switch the working directory to `D:\style_transfer`; enter the following commands on the command line
- ```shell
# Enter the following command on the command line
# Switch the current working directory to the root directory of disk D
D:
# Create style_transfer directory
mkdir style_transfer
# Switch the current directory to style_transfer directory
cd style_transfer
```
- Put the content image and the style image in place
- Save the image to be converted as `D:\style_transfer\pic.jpg`
- <img src="../../imgs/Install_Related/windows/pic.jpg" alt="pic.jpg" width="400" align="center"/>
- Save the style image as `D:\style_transfer\fangao.jpg`
- <img src="../../imgs/Install_Related/windows/fangao.jpg" alt="fangao.jpg" width="350" align="center"/>
### Code
- Create the code file `style_transfer.py` in `D:\style_transfer`
- If you do not have an editor such as VS Code, you can create a .txt file with Notepad first and then rename it to `style_transfer.py`
- Copy the following code into `style_transfer.py`
- ```python
import paddlehub as hub
import cv2
# Absolute path of the image to be converted
picture = 'D:\\style_transfer\\pic.jpg' # Note that backslashes are doubled (escaped) in the code
# Absolute path of the style image
style_image = 'D:\\style_transfer\\fangao.jpg'
# Create the style transfer network and load its parameters
stylepro_artistic = hub.Module(name="stylepro_artistic")
# Read the images and start the style transfer
result = stylepro_artistic.style_transfer(
images=[{'content': cv2.imread(picture),
'styles': [cv2.imread(style_image)]}],
visualization=True
)
```
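- As an aside, Python raw strings avoid the doubled backslashes; these two lines are equivalent to the paths above:
- ```python
# A raw string treats backslashes literally, so no escaping is needed
picture = r'D:\style_transfer\pic.jpg'
style_image = r'D:\style_transfer\fangao.jpg'
```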
- Run the code:
- On the command line, enter `python style_transfer.py`
- When the program finishes, a new folder `transfer_result` is created and the converted image is saved there.
- The output image looks like this:
- <img src="../../imgs/Install_Related/windows/after_transfer.png" alt="transferred image" width="600" align="center"/>
## Step 6: Explore more PaddlePaddle pre-trained models
- Congratulations! You have completed the PaddleHub installation and introductory example in the Windows environment. Jump right into a deeper exploration of pre-trained models. [Explore more models on the official PaddlePaddle website](https://www.paddlepaddle.org.cn/hublist)
@@ -222,43 +222,44 @@ English | [简体中文](README_ch.md)
|module|Network|Dataset|Introduction|Huggingface Spaces Demo|
|--|--|--|--|--|
|[chinese_ocr_db_crnn_mobile](image/text_recognition/chinese_ocr_db_crnn_mobile)|Differentiable Binarization+RCNN|icdar2015|Chinese text recognition|[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PaddlePaddle/chinese_ocr_db_crnn_mobile) |
|[chinese_text_detection_db_mobile](image/text_recognition/chinese_text_detection_db_mobile)|Differentiable Binarization|icdar2015|Chinese text detection|
|[chinese_text_detection_db_server](image/text_recognition/chinese_text_detection_db_server)|Differentiable Binarization|icdar2015|Chinese text detection|
|[chinese_ocr_db_crnn_server](image/text_recognition/chinese_ocr_db_crnn_server)|Differentiable Binarization+RCNN|icdar2015|Chinese text recognition|[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PaddlePaddle/chinese_ocr_db_crnn_server) |
|[Vehicle_License_Plate_Recognition](image/text_recognition/Vehicle_License_Plate_Recognition)|-|CCPD|Vehicle license plate recognition|
|[chinese_cht_ocr_db_crnn_mobile](image/text_recognition/chinese_cht_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Traditional Chinese text recognition|
|[japan_ocr_db_crnn_mobile](image/text_recognition/japan_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Japanese text recognition|
|[korean_ocr_db_crnn_mobile](image/text_recognition/korean_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Korean text recognition|
|[german_ocr_db_crnn_mobile](image/text_recognition/german_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|German text recognition|
|[french_ocr_db_crnn_mobile](image/text_recognition/french_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|French text recognition|
|[latin_ocr_db_crnn_mobile](image/text_recognition/latin_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Latin text recognition|
|[cyrillic_ocr_db_crnn_mobile](image/text_recognition/cyrillic_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Cyrillic text recognition|
|[multi_languages_ocr_db_crnn](image/text_recognition/multi_languages_ocr_db_crnn)|Differentiable Binarization+RCNN|icdar2015|Multilingual text recognition|
|[kannada_ocr_db_crnn_mobile](image/text_recognition/kannada_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Kannada text recognition|
|[arabic_ocr_db_crnn_mobile](image/text_recognition/arabic_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Arabic text recognition|
|[telugu_ocr_db_crnn_mobile](image/text_recognition/telugu_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Telugu text recognition|
|[devanagari_ocr_db_crnn_mobile](image/text_recognition/devanagari_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Devanagari text recognition|
|[tamil_ocr_db_crnn_mobile](image/text_recognition/tamil_ocr_db_crnn_mobile)|Differentiable Binarization+CRNN|icdar2015|Tamil text recognition|
- ### Image Editing
|module|Network|Dataset|Introduction|Huggingface Spaces Demo|
|--|--|--|--|--|
|[realsr](image/Image_editing/super_resolution/realsr)|LP-KPN|RealSR dataset|Image / video super-resolution (4x)|
|[deoldify](image/Image_editing/colorization/deoldify)|GAN|ILSVRC 2012|Black-and-white image / video colorization|[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PaddlePaddle/deoldify) |
|[photo_restoration](image/Image_editing/colorization/photo_restoration)|deoldify + realsr|-|Old photo restoration|
|[user_guided_colorization](image/Image_editing/colorization/user_guided_colorization)|siggraph|ILSVRC 2012|User-guided colorization|
|[falsr_c](image/Image_editing/super_resolution/falsr_c)|falsr_c| DIV2k|Lightweight super resolution - 2x|
|[dcscn](image/Image_editing/super_resolution/dcscn)|dcscn| DIV2k|Lightweight super resolution - 2x|
|[falsr_a](image/Image_editing/super_resolution/falsr_a)|falsr_a| DIV2k|Lightweight super resolution - 2x|
|[falsr_b](image/Image_editing/super_resolution/falsr_b)|falsr_b|DIV2k|Lightweight super resolution - 2x|
- ### Instance Segmentation
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[solov2](image/instance_segmentation/solov2)|-|COCO2014|Instance segmentation|
- ### Object Detection
@@ -266,16 +267,16 @@ English | [简体中文](README_ch.md)
|--|--|--|--|
|[faster_rcnn_resnet50_coco2017](image/object_detection/faster_rcnn_resnet50_coco2017)|faster_rcnn|COCO2017||
|[ssd_vgg16_512_coco2017](image/object_detection/ssd_vgg16_512_coco2017)|SSD|COCO2017||
|[faster_rcnn_resnet50_fpn_venus](image/object_detection/faster_rcnn_resnet50_fpn_venus)|faster_rcnn|Baidu self-built dataset|Large-scale general object detection|
|[ssd_vgg16_300_coco2017](image/object_detection/ssd_vgg16_300_coco2017)||||
|[yolov3_resnet34_coco2017](image/object_detection/yolov3_resnet34_coco2017)|YOLOv3|COCO2017||
|[yolov3_darknet53_pedestrian](image/object_detection/yolov3_darknet53_pedestrian)|YOLOv3|Baidu self-built large-scale pedestrian dataset|Pedestrian detection|
|[yolov3_mobilenet_v1_coco2017](image/object_detection/yolov3_mobilenet_v1_coco2017)|YOLOv3|COCO2017||
|[ssd_mobilenet_v1_pascal](image/object_detection/ssd_mobilenet_v1_pascal)|SSD|PASCAL VOC||
|[faster_rcnn_resnet50_fpn_coco2017](image/object_detection/faster_rcnn_resnet50_fpn_coco2017)|faster_rcnn|COCO2017||
|[yolov3_darknet53_coco2017](image/object_detection/yolov3_darknet53_coco2017)|YOLOv3|COCO2017||
|[yolov3_darknet53_vehicles](image/object_detection/yolov3_darknet53_vehicles)|YOLOv3|Baidu self-built large-scale vehicle dataset|Vehicle detection|
|[yolov3_darknet53_venus](image/object_detection/yolov3_darknet53_venus)|YOLOv3|Baidu self-built dataset|Large-scale general object detection|
|[yolov3_resnet50_vd_coco2017](image/object_detection/yolov3_resnet50_vd_coco2017)|YOLOv3|COCO2017||
- ### Depth Estimation
@@ -290,22 +291,22 @@ English | [简体中文](README_ch.md)
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[ernie_gen](text/text_generation/ernie_gen)|ERNIE-GEN|-|Pre-training and fine-tuning framework for generation tasks|
|[ernie_gen_poetry](text/text_generation/ernie_gen_poetry)|ERNIE-GEN|Open-source poetry dataset|Poetry generation|
|[ernie_gen_couplet](text/text_generation/ernie_gen_couplet)|ERNIE-GEN|Open-source couplet dataset|Couplet generation|
|[ernie_gen_lover_words](text/text_generation/ernie_gen_lover_words)|ERNIE-GEN|Web-sourced love poems and sweet-talk data|Love message generation|
|[ernie_tiny_couplet](text/text_generation/ernie_tiny_couplet)|ernie_tiny|Open-source couplet dataset|Couplet generation|
|[ernie_gen_acrostic_poetry](text/text_generation/ernie_gen_acrostic_poetry)|ERNIE-GEN|Open-source poetry dataset|Acrostic poetry generation|
|[Rumor_prediction](text/text_generation/Rumor_prediction)|-|Sina Weibo Chinese rumor data|Rumor prediction|
|[plato-mini](text/text_generation/plato-mini)|Unified Transformer|Billion-scale Chinese conversation data|Chinese dialogue|
|[plato2_en_large](text/text_generation/plato2_en_large)|plato2|Open-domain multi-turn dialogue dataset|Super-large-scale generative dialogue|
|[plato2_en_base](text/text_generation/plato2_en_base)|plato2|Open-domain multi-turn dialogue dataset|Super-large-scale generative dialogue|
|[CPM_LM](text/text_generation/CPM_LM)|GPT-2|Self-built dataset|Chinese text generation|
|[unified_transformer-12L-cn](text/text_generation/unified_transformer-12L-cn)|Unified Transformer|Tens of millions of Chinese conversations|Human-machine multi-turn dialogue|
|[unified_transformer-12L-cn-luge](text/text_generation/unified_transformer-12L-cn-luge)|Unified Transformer|Qianyan dialogue dataset|Human-machine multi-turn dialogue|
|[reading_pictures_writing_poems](text/text_generation/reading_pictures_writing_poems)|Multi-network cascade|-|Writing poems from images|
|[GPT2_CPM_LM](text/text_generation/GPT2_CPM_LM)|||Q&A text generation|
|[GPT2_Base_CN](text/text_generation/GPT2_Base_CN)|||Q&A text generation|
- ### Word Embedding
@@ -316,7 +317,7 @@ English | [简体中文](README_ch.md)
|[w2v_weibo_target_word-bigram_dim300](text/embedding/w2v_weibo_target_word-bigram_dim300)|w2v|weibo||
|[w2v_baidu_encyclopedia_target_word-ngram_1-2_dim300](text/embedding/w2v_baidu_encyclopedia_target_word-ngram_1-2_dim300)|w2v|baidu_encyclopedia||
|[w2v_literature_target_word-word_dim300](text/embedding/w2v_literature_target_word-word_dim300)|w2v|literature||
|[word2vec_skipgram](text/embedding/word2vec_skipgram)|skip-gram|Baidu self-built dataset||
|[w2v_sogou_target_word-char_dim300](text/embedding/w2v_sogou_target_word-char_dim300)|w2v|sogou||
|[w2v_weibo_target_bigram-char_dim300](text/embedding/w2v_weibo_target_bigram-char_dim300)|w2v|weibo||
|[w2v_zhihu_target_word-bigram_dim300](text/embedding/w2v_zhihu_target_word-bigram_dim300)|w2v|zhihu||
@@ -393,16 +394,16 @@ English | [简体中文](README_ch.md)
|--|--|--|--|
|[chinese_electra_small](text/language_model/chinese_electra_small)||||
|[chinese_electra_base](text/language_model/chinese_electra_base)||||
|[roberta-wwm-ext-large](text/language_model/roberta-wwm-ext-large)|roberta-wwm-ext-large|Baidu self-built dataset||
|[chinese-bert-wwm-ext](text/language_model/chinese_bert_wwm_ext)|chinese-bert-wwm-ext|Baidu self-built dataset||
|[lda_webpage](text/language_model/lda_webpage)|LDA|Baidu self-built web page domain dataset||
|[lda_novel](text/language_model/lda_novel)||||
|[bert-base-multilingual-uncased](text/language_model/bert-base-multilingual-uncased)||||
|[rbt3](text/language_model/rbt3)||||
|[ernie_v2_eng_base](text/language_model/ernie_v2_eng_base)|ernie_v2_eng_base|Baidu self-built dataset||
|[bert-base-multilingual-cased](text/language_model/bert-base-multilingual-cased)||||
|[rbtl3](text/language_model/rbtl3)||||
|[chinese-bert-wwm](text/language_model/chinese_bert_wwm)|chinese-bert-wwm|Baidu self-built dataset||
|[bert-large-uncased](text/language_model/bert-large-uncased)||||
|[slda_novel](text/language_model/slda_novel)||||
|[slda_news](text/language_model/slda_news)||||
@@ -410,16 +411,16 @@ English | [简体中文](README_ch.md)
|[slda_webpage](text/language_model/slda_webpage)||||
|[bert-base-cased](text/language_model/bert-base-cased)||||
|[slda_weibo](text/language_model/slda_weibo)||||
|[roberta-wwm-ext](text/language_model/roberta-wwm-ext)|roberta-wwm-ext|Baidu self-built dataset||
|[bert-base-uncased](text/language_model/bert-base-uncased)||||
|[electra_large](text/language_model/electra_large)||||
|[ernie](text/language_model/ernie)|ernie-1.0|Baidu self-built dataset||
|[simnet_bow](text/language_model/simnet_bow)|BOW|Baidu self-built dataset||
|[ernie_tiny](text/language_model/ernie_tiny)|ernie_tiny|Baidu self-built dataset||
|[bert-base-chinese](text/language_model/bert-base-chinese)|bert-base-chinese|Baidu self-built dataset||
|[lda_news](text/language_model/lda_news)|LDA|Baidu self-built news domain dataset||
|[electra_base](text/language_model/electra_base)||||
|[ernie_v2_eng_large](text/language_model/ernie_v2_eng_large)|ernie_v2_eng_large|Baidu self-built dataset||
|[bert-large-cased](text/language_model/bert-large-cased)||||
</div></details>
@@ -429,51 +430,51 @@ English | [简体中文](README_ch.md)
|module|Network|Dataset|Introduction|Huggingface Spaces Demo|
|--|--|--|--|--|
|[ernie_skep_sentiment_analysis](text/sentiment_analysis/ernie_skep_sentiment_analysis)|SKEP|Baidu self-built dataset|Sentence-level sentiment analysis|
|[emotion_detection_textcnn](text/sentiment_analysis/emotion_detection_textcnn)|TextCNN|Baidu self-built dataset|Dialogue emotion detection|
|[senta_bilstm](text/sentiment_analysis/senta_bilstm)|BiLSTM|Baidu self-built dataset|Chinese sentiment analysis| [![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PaddlePaddle/senta_bilstm)
|[senta_bow](text/sentiment_analysis/senta_bow)|BOW|Baidu self-built dataset|Chinese sentiment analysis|
|[senta_gru](text/sentiment_analysis/senta_gru)|GRU|Baidu self-built dataset|Chinese sentiment analysis|
|[senta_lstm](text/sentiment_analysis/senta_lstm)|LSTM|Baidu self-built dataset|Chinese sentiment analysis|
|[senta_cnn](text/sentiment_analysis/senta_cnn)|CNN|Baidu self-built dataset|Chinese sentiment analysis|
- ### Syntactic Analysis
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[DDParser](text/syntactic_analysis/DDParser)|Deep Biaffine Attention|Search queries, web text, voice input, and other data|Syntactic analysis|
- ### Simultaneous Translation
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[transformer_nist_wait_1](text/simultaneous_translation/stacl/transformer_nist_wait_1)|transformer|NIST 2008 Chinese-English translation dataset|Chinese-to-English - wait-1 policy|
|[transformer_nist_wait_3](text/simultaneous_translation/stacl/transformer_nist_wait_3)|transformer|NIST 2008 Chinese-English translation dataset|Chinese-to-English - wait-3 policy|
|[transformer_nist_wait_5](text/simultaneous_translation/stacl/transformer_nist_wait_5)|transformer|NIST 2008 Chinese-English translation dataset|Chinese-to-English - wait-5 policy|
|[transformer_nist_wait_7](text/simultaneous_translation/stacl/transformer_nist_wait_7)|transformer|NIST 2008 Chinese-English translation dataset|Chinese-to-English - wait-7 policy|
|[transformer_nist_wait_all](text/simultaneous_translation/stacl/transformer_nist_wait_all)|transformer|NIST 2008 Chinese-English translation dataset|Chinese-to-English - waitk=-1 policy|
- ### Lexical Analysis
|module|Network|Dataset|Introduction|Huggingface Spaces Demo|
|--|--|--|--|--|
|[jieba_paddle](text/lexical_analysis/jieba_paddle)|BiGRU+CRF|Baidu self-built dataset|jieba's word segmentation network built with Paddle (bidirectional GRU). It also supports jieba's traditional segmentation modes, such as precise mode, full mode, and search engine mode.|
|[lac](text/lexical_analysis/lac)|BiGRU+CRF|Baidu self-built dataset|Baidu's jointly developed lexical analysis model, which performs Chinese word segmentation, part-of-speech tagging, and named entity recognition as a whole. Evaluated on Baidu's self-built dataset, LAC achieves Precision=88.0%, Recall=88.7%, F1-Score=88.4%.|[![Hugging Face Spaces](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/PaddlePaddle/lac)
- ### Punctuation Restoration
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[auto_punc](text/punctuation_restoration/auto_punc)|Ernie-1.0|WuDaoCorpora 2.0|Automatically adds 7 kinds of punctuation marks|
- ### Text Review
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[porn_detection_cnn](text/text_review/porn_detection_cnn)|CNN|Baidu self-built dataset|Pornography detection: automatically determines whether a text is pornographic and gives a confidence score, identifying pornographic descriptions, vulgar dating content, and obscene text|
|[porn_detection_gru](text/text_review/porn_detection_gru)|GRU|Baidu self-built dataset|Pornography detection: automatically determines whether a text is pornographic and gives a confidence score, identifying pornographic descriptions, vulgar dating content, and obscene text|
|[porn_detection_lstm](text/text_review/porn_detection_lstm)|LSTM|Baidu self-built dataset|Pornography detection: automatically determines whether a text is pornographic and gives a confidence score, identifying pornographic descriptions, vulgar dating content, and obscene text|
## Audio
......@@ -481,62 +482,62 @@ English | [简体中文](README_ch.md)
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[ge2e_fastspeech2_pwgan](audio/voice_cloning/ge2e_fastspeech2_pwgan)|FastSpeech2|AISHELL-3|Chinese voice cloning|
|[lstm_tacotron2](audio/voice_cloning/lstm_tacotron2)|LSTM, Tacotron2, WaveFlow|AISHELL-3|Chinese voice cloning|
- ### Text to Speech
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[transformer_tts_ljspeech](audio/tts/transformer_tts_ljspeech)|Transformer|LJSpeech-1.1|English speech synthesis|
|[fastspeech_ljspeech](audio/tts/fastspeech_ljspeech)|FastSpeech|LJSpeech-1.1|English speech synthesis|
|[fastspeech2_baker](audio/tts/fastspeech2_baker)|FastSpeech2|Chinese Standard Mandarin Speech Corpus|Chinese speech synthesis|
|[fastspeech2_ljspeech](audio/tts/fastspeech2_ljspeech)|FastSpeech2|LJSpeech-1.1|English speech synthesis|
|[deepvoice3_ljspeech](audio/tts/deepvoice3_ljspeech)|DeepVoice3|LJSpeech-1.1|English speech synthesis|
- ### Automatic Speech Recognition
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[deepspeech2_aishell](audio/asr/deepspeech2_aishell)|DeepSpeech2|AISHELL-1|Chinese Speech Recognition|
|[deepspeech2_librispeech](audio/asr/deepspeech2_librispeech)|DeepSpeech2|LibriSpeech|English Speech Recognition|
|[u2_conformer_aishell](audio/asr/u2_conformer_aishell)|Conformer|AISHELL-1|Chinese Speech Recognition|
|[u2_conformer_wenetspeech](audio/asr/u2_conformer_wenetspeech)|Conformer|WenetSpeech|Chinese Speech Recognition|
|[u2_conformer_librispeech](audio/asr/u2_conformer_librispeech)|Conformer|LibriSpeech|English Speech Recognition|
- ### Audio Classification
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[panns_cnn6](audio/audio_classification/PANNs/cnn6)|PANNs|Google Audioset|Mainly includes 4 convolution layers and 2 fully connected layers; the model has 4.5M parameters. After pre-training, it can be used to extract audio embeddings with a dimension of 512|
|[panns_cnn14](audio/audio_classification/PANNs/cnn14)|PANNs|Google Audioset|Mainly includes 12 convolution layers and 2 fully connected layers; the model has 79.6M parameters. After pre-training, it can be used to extract audio embeddings with a dimension of 2048|
|[panns_cnn10](audio/audio_classification/PANNs/cnn10)|PANNs|Google Audioset|Mainly includes 8 convolution layers and 2 fully connected layers; the model has 4.9M parameters. After pre-training, it can be used to extract audio embeddings with a dimension of 512|
## Video
- ### Video Classification
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[videotag_tsn_lstm](video/classification/videotag_tsn_lstm)|TSN + AttentionLSTM|Baidu self-built dataset|Large-scale short-video classification and tagging|
|[tsn_kinetics400](video/classification/tsn_kinetics400)|TSN|Kinetics-400|Video classification|
|[tsm_kinetics400](video/classification/tsm_kinetics400)|TSM|Kinetics-400|Video classification|
|[stnet_kinetics400](video/classification/stnet_kinetics400)|StNet|Kinetics-400|Video classification|
|[nonlocal_kinetics400](video/classification/nonlocal_kinetics400)|Non-local|Kinetics-400|Video classification|
- ### Video Editing
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[SkyAR](video/Video_editing/SkyAR)|UNet|UNet|Video sky replacement|
- ### Multiple Object tracking
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[fairmot_dla34](video/multiple_object_tracking/fairmot_dla34)|CenterNet|Caltech Pedestrian+CityPersons+CUHK-SYSU+PRW+ETHZ+MOT17|Real-time multiple object tracking|
|[jde_darknet53](video/multiple_object_tracking/jde_darknet53)|YOLOv3|Caltech Pedestrian+CityPersons+CUHK-SYSU+PRW+ETHZ+MOT17|Multiple object tracking balancing accuracy and speed|
## Industrial Application
@@ -544,4 +545,4 @@ English | [简体中文](README_ch.md)
|module|Network|Dataset|Introduction|
|--|--|--|--|
|[WatermeterSegmentation](image/semantic_segmentation/WatermeterSegmentation)|DeepLabV3|Water meter digital dial segmentation dataset|Water meter digital dial segmentation|