- input (numpy.ndarray|str): Image data, a numpy.ndarray or a str path. ndarray.shape is in the format [H, W, C], BGR.
- model_select (list\[str\]): Mode selection. \['Colorization'\] only colorizes the input image; \['SuperResolution'\] only increases the image resolution. The default is \['Colorization', 'SuperResolution'\].
- save_path (str): Save path, default is 'photo_restoration'.
- **Return**
- output (numpy.ndarray): Restoration result; ndarray.shape is in the format [H, W, C], BGR.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of photo restoration.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m photo_restoration
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
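- The code below is a minimal client sketch. It assumes the common PaddleHub Serving convention of posting base64-encoded images as JSON under an `images` key to `http://127.0.0.1:8866/predict/<module_name>`; extra fields such as `model_select` follow the API above, so verify the exact payload against the deployed module.
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode an ndarray image as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
url = 'http://127.0.0.1:8866/predict/photo_restoration'
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()['results'])
```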
- User_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors". The model uses user-provided color blocks as priors to colorize a grayscale image.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install user_guided_colorization
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run user_guided_colorization --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='user_guided_colorization')
    model.set_config(prob=0.1)
    result = model.predict(images=['/PATH/TO/IMAGE'])
```
- ### 3. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the user_guided_colorization model on datasets such as [Canvas](../../docs/reference/datasets.md#class-hubdatasetsCanvas) by executing `python train.py`.
- `transforms`: The data augmentation module defines a number of data preprocessing methods. Users can replace these preprocessing methods according to their needs; a data-preparation sketch follows this list.
* `mode`: Select the data mode; the options are `train`, `test`, `val`. Default is `train`.
* `hub.datasets.Canvas()`: The dataset will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory under the user directory.
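- A data-preparation sketch for these steps, assuming the `paddlehub.vision.transforms` API; the specific ops (resize size, crop size, LAB conversion) are illustrative assumptions, not the only valid pipeline:
- ```python
import paddlehub as hub
import paddlehub.vision.transforms as T

# build the preprocessing pipeline (ops here are assumptions)
transform = T.Compose([T.Resize((256, 256), interpolation='NEAREST'),
                       T.RandomPaddingCrop(crop_size=176),
                       T.RGB2LAB()], to_rgb=True)

# load the Canvas dataset with the transforms and mode described above
color_set = hub.datasets.Canvas(transform=transform, mode='train')
```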
- Step 3: Load the pre-trained model
- ```python
model = hub.Module(name='user_guided_colorization', load_checkpoint=None)
model.set_config(classification=True, prob=1)
```
* `name`: Model name.
* `load_checkpoint`: Path to a self-trained checkpoint; if it is None, the provided pre-trained parameters are loaded.
* `classification`: The model is trained in two modes. At the beginning, `classification` is set to True for shallow network training. In the later stage of training, set `classification` to False to train the output layer of the network.
* `prob`: The probability that a prior color block is not added to each input image; the default is 1, i.e., no prior color blocks are added. For example, when `prob` is set to 0.9, the probability that there are exactly two prior color blocks on a picture is (1-0.9)*(1-0.9)*0.9 = 0.009.
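- As a quick check of the arithmetic above: the number of prior color blocks follows a geometric distribution with stopping probability `prob`. A minimal sketch:
- ```python
def prob_k_blocks(k, prob=0.9):
    # probability of exactly k prior color blocks: each draw adds a block
    # with probability (1 - prob) and stops with probability prob
    return (1 - prob) ** k * prob

print(prob_k_blocks(2))  # 0.009..., matching the example above
```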
- `Trainer` mainly controls the fine-tuning process, including the following controllable parameters:
* `model`: Optimized model.
* `optimizer`: Optimizer selection.
* `use_vdl`: Whether to use vdl to visualize the training process.
* `checkpoint_dir`: The storage address of the model parameters.
* `compare_metrics`: The measurement index of the optimal model.
- `trainer.train` mainly controls the specific training process, including the following controllable parameters (a usage sketch follows this list):
* `train_dataset`: Training dataset.
* `epochs`: Epochs of training process.
* `batch_size`: Batch size.
* `num_workers`: Number of workers.
* `eval_dataset`: Validation dataset.
* `log_interval`: The interval for printing logs.
* `save_interval`: The interval for saving model parameters.
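- A combined usage sketch for `Trainer` and `trainer.train`; the optimizer choice and hyperparameters are assumptions for illustration:
- ```python
import paddle
import paddlehub as hub

# `model` and `color_set` come from the previous steps
optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())
trainer = hub.Trainer(model, optimizer, checkpoint_dir='img_colorization_ckpt', use_vdl=True)
trainer.train(color_set,
              epochs=101,
              batch_size=25,
              eval_dataset=color_set,
              log_interval=10,
              save_interval=10)
```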
- Model prediction
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
- ```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='user_guided_colorization', load_checkpoint='/PATH/TO/CHECKPOINT')
    model.set_config(prob=0.1)
    result = model.predict(images=['/PATH/TO/IMAGE'])
```
- **NOTE:** If you want to get the oil painting style, please download the parameter file [Canvas colorization](https://paddlehub.bj.bcebos.com/dygraph/models/canvas_rc.pdparams).
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of colorization.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m user_guided_colorization
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
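- A minimal client sketch, assuming the same base64-JSON PaddleHub Serving convention used throughout this document:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/user_guided_colorization',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```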
- DCSCN is a super-resolution model based on "Fast and Accurate Image Super Resolution by Deep CNN with Skip Connection and Network in Network". The model uses a residual structure and skip connections to extract local and global features, and uses a parallel 1×1 convolutional network to learn detailed features, improving model performance. This model provides a super-resolution result with scale factor x2.
- For more information, please refer to: [dcscn](https://github.com/jiny2001/dcscn-super-resolution)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install dcscn
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import cv2
import paddlehub as hub
sr_model = hub.Module(name='dcscn')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
sr_model.save_inference_model()
```
- ### 3、API
- ```python
def reconstruct(self,
                images=None,
                paths=None,
                use_gpu=False,
                visualization=False,
                output_dir="dcscn_output")
```
- Prediction API.
- **Parameter**
* images (list\[numpy.ndarray\]): Image data; ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "dcscn_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result, save_path is '' if no image is saved.
* data (numpy.ndarray): Result of super resolution.
- ```python
def save_inference_model(self,
                         dirname='dcscn_save_model',
                         model_filename=None,
                         params_filename=None,
                         combined=False)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of super resolution.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m dcscn
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
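- A minimal client sketch, assuming the same base64-JSON PaddleHub Serving convention; the super-resolution result is returned in the JSON body:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/dcscn',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```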
- Falsr_a is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to handle the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve model performance. This model provides a super-resolution result with scale factor x2.
- For more information, please refer to: [falsr_a](https://github.com/xiaomi-automl/FALSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install falsr_a
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import cv2
import paddlehub as hub
sr_model = hub.Module(name='falsr_a')
im = cv2.imread('/PATH/TO/IMAGE').astype('float32')
res = sr_model.reconstruct(images=[im], visualization=True)
print(res[0]['data'])
sr_model.save_inference_model()
```
- ### 3、API
- ```python
def reconstruct(self,
                images=None,
                paths=None,
                use_gpu=False,
                visualization=False,
                output_dir="falsr_a_output")
```
- Prediction API.
- **Parameter**
* images (list\[numpy.ndarray\]): Image data; ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "falsr_a_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result, save_path is '' if no image is saved.
* data (numpy.ndarray): Result of super resolution.
- ```python
def save_inference_model(self,
                         dirname='falsr_a_save_model',
                         model_filename=None,
                         params_filename=None,
                         combined=False)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of super resolution.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m falsr_a
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
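- A minimal client sketch, assuming the same base64-JSON PaddleHub Serving convention:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/falsr_a',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```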
- Falsr_b is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to handle the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve model performance. This model provides a super-resolution result with scale factor x2.
- For more information, please refer to: [falsr_b](https://github.com/xiaomi-automl/FALSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install falsr_b
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- Falsr_c is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to handle the super-resolution problem, and uses an elastic search strategy based on a hybrid controller to improve model performance. This model provides a super-resolution result with scale factor x2.
- For more information, please refer to: [falsr_c](https://github.com/xiaomi-automl/FALSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install falsr_c
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- Realsr is a super-resolution model for images and video based on "Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model". This model provides a super-resolution result with scale factor x4.
- For more information, please refer to: [realsr](https://github.com/csjcai/RealSR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- **NOTE**: This module relies on ffmpeg; please install ffmpeg before using this module.
The image attributes are: original image, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Gender, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Aged
- ### Module Introduction
- AttGAN is a Generative Adversarial Network that uses classification loss and reconstruction loss to train the network. The PaddleHub Module is trained on the CelebA dataset and currently supports the attributes "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install attgan_celeba==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run attgan_celeba --image "/PATH/TO/IMAGE" --style "target_attribute"
```
- **Parameters**
- image: Input image path.
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- CycleGAN belongs to Generative Adversarial Networks (GANs). Unlike traditional GANs, which can only transfer images in one direction, CycleGAN learns the style transfer between two domains simultaneously. The PaddleHub Module is trained on the Cityscapes dataset and supports conversion from real images to semantic segmentation results, as well as conversion from semantic segmentation results to real images.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.4.0
- paddlehub >= 1.1.0
- ### 2、Installation
- ```shell
$ hub install cyclegan_cityscapes==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run cyclegan_cityscapes --input_path "/PATH/TO/IMAGE"
```
- **Parameters**
- input_path: Image path.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
cyclegan = hub.Module(name="cyclegan_cityscapes")
test_img_path = "/PATH/TO/IMAGE"
# set input dict
input_dict = {"image": [test_img_path]}
# execute predict and print the result
results = cyclegan.generate(data=input_dict)
print(results)
```
- ### 3、API
- ```python
def generate(data)
```
- Style transfer API.
- **Parameters**
- data (list\[dict\]): Each element in the list is a dict, and each field is:
- image (list\[str\]): Image path.
- **Return**
- res (list\[dict\]): The list of style transfer results, where each element is a dict.
- StarGAN is a Generative Adversarial Network for multi-domain image-to-image translation that performs attribute editing with a single generator conditioned on the target attribute. The PaddleHub Module is trained on the CelebA dataset and currently supports the attributes "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stargan_celeba==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run stargan_celeba --image "/PATH/TO/IMAGE" --style "target_attribute"
```
- **Parameters**
- image: Image path.
- style: Specify the attributes to be converted. The options are "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
The image attributes are: original image, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Gender, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Aged<br/>
</p>
- ### Module Introduction
- STGAN takes the original attributes and the target attributes as input, and proposes STUs (Selective Transfer Units) to select and modify features of the encoder. The PaddleHub Module is trained on the CelebA dataset and currently supports the attributes "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.5.2
- paddlehub >= 1.0.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install stgan_celeba==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run stgan_celeba --image "/PATH/TO/IMAGE" --info "original_attributes" --style "target_attribute"
```
- **Parameters**
- image: Image path
- info: Attributes of the original image; the gender ("Male" or "Female") must be included. The other options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". For example, if the input picture is a girl with black hair, fill in "Female,Black_Hair".
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- AnimeGAN V2 is an image style transfer model that can convert the input image into the Paprika anime style. The model weights are converted from the [AnimeGAN V2 official repo](https://github.com/TachibanaYoshino/AnimeGAN).
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_54
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- AnimeGAN V2 is an image style transfer model that can convert the input image into the Paprika anime style. The model weights are converted from the [AnimeGAN V2 official repo](https://github.com/TachibanaYoshino/AnimeGAN).
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 1.8.0
- paddlehub >= 1.8.0 | [How to install PaddleHub](../../../../docs/docs_en/get_start/installation.rst)
- ### 2、Installation
- ```shell
$ hub install animegan_v2_paprika_97
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run msgnet --input_path "/PATH/TO/ORIGIN/IMAGE" --style_path "/PATH/TO/STYLE/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='msgnet')
    result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path="/PATH/TO/SAVE/IMAGE")
```
- ### 3. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the msgnet model on datasets such as [MiniCOCO](../../docs/reference/datasets.md#class-hubdatasetsMiniCOCO) by executing `python train.py`.
- `transforms`: The data augmentation module defines a number of data preprocessing methods. Users can replace these preprocessing methods according to their needs; a data-preparation sketch follows this list.
* `mode`: Select the data mode; the options are `train`, `test`, `val`. Default is `train`.
- Dataset preparation can be referred to [minicoco.py](../../paddlehub/datasets/minicoco.py). `hub.datasets.MiniCOCO()` will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory under the user directory.
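- A data-preparation sketch, assuming the `paddlehub.vision.transforms` API; the resize choice is an illustrative assumption:
- ```python
import paddlehub as hub
import paddlehub.vision.transforms as T

# preprocessing for style-transfer fine-tuning (resize choice is an assumption)
transform = T.Compose([T.Resize((256, 256), interpolation='LINEAR')])
styledata = hub.datasets.MiniCOCO(transform=transform, mode='train')
```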
- Step 3: Load the pre-trained model
- ```python
model = hub.Module(name='msgnet', load_checkpoint=None)
```
* `name`: Model name.
* `load_checkpoint`: Path to a self-trained checkpoint; if it is None, the provided pre-trained parameters are loaded.
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
- ```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='msgnet', load_checkpoint="/PATH/TO/CHECKPOINT")
    result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path="/PATH/TO/SAVE/IMAGE")
```
- **Parameters**
* `origin`: Image path or ndarray data with format [H, W, C], BGR.
* `style`: Style image path.
* `visualization`: Whether to save the recognition results as picture files.
* `save_path`: Save path of the result, default is 'style_tranfer'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m msgnet
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
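- A minimal client sketch. The payload layout below (origin images grouped with one style image) is a hypothetical mirror of the `predict(origin=..., style=...)` signature above; verify it against the deployed module:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

origin = cv2_to_base64(cv2.imread('/PATH/TO/ORIGIN/IMAGE'))
style = cv2_to_base64(cv2.imread('/PATH/TO/STYLE/IMAGE'))
# hypothetical payload mirroring predict(origin=..., style=...)
data = {'images': [[origin], style]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/msgnet',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```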
- ResNet-vd is a variant of ResNet that can be used for image classification and feature extraction. This module is trained on a Baidu self-built animal dataset and supports the classification and recognition of 7,978 animal species.
- For more information, please refer to [ResNet-vd](https://arxiv.org/pdf/1812.01187.pdf)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install resnet50_vd_animals
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run resnet50_vd_animals --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ```shell
$ hub run resnet50_vd_imagenet_ssld --input_path "/PATH/TO/IMAGE" --top_k 5
```
- ### 2、Prediction Code Example
```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='resnet50_vd_imagenet_ssld')
    result = model.predict(['/PATH/TO/IMAGE'])
```
- ### 3. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the resnet50_vd_imagenet_ssld model on datasets such as [Flowers](../../docs/reference/datasets.md#class-hubdatasetsflowers) by executing `python train.py`.
- `transforms`: The data augmentation module defines a number of data preprocessing methods. Users can replace these preprocessing methods according to their needs; a data-preparation sketch follows this list.
* `mode`: Select the data mode; the options are `train`, `test`, `val`. Default is `train`.
* `hub.datasets.Flowers()` will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory under the user directory.
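- A data-preparation sketch, assuming the `paddlehub.vision.transforms` API; the exact ops and normalization constants are illustrative assumptions:
- ```python
import paddlehub as hub
import paddlehub.vision.transforms as T

# typical classification preprocessing (the exact ops are assumptions)
transforms = T.Compose([T.Resize((256, 256)),
                        T.CenterCrop(224),
                        T.Normalize(mean=[0.485, 0.456, 0.406],
                                    std=[0.229, 0.224, 0.225])],
                       to_rgb=True)
flowers = hub.datasets.Flowers(transforms, mode='train')
```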
- Step 3: Load the pre-trained model
- ```python
model = hub.Module(name="resnet50_vd_imagenet_ssld", label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"])
```
* `name`: Model name.
* `label_list`: Set the output classification categories; the default is the ImageNet 2012 categories.
- `Trainer` mainly controls the fine-tuning process, including the following controllable parameters:
* `model`: Optimized model.
* `optimizer`: Optimizer selection.
* `use_vdl`: Whether to use vdl to visualize the training process.
* `checkpoint_dir`: The storage address of the model parameters.
* `compare_metrics`: The measurement index of the optimal model.
- `trainer.train` mainly controls the specific training process, including the following controllable parameters (a usage sketch follows this list):
* `train_dataset`: Training dataset.
* `epochs`: Epochs of training process.
* `batch_size`: Batch size.
* `num_workers`: Number of workers.
* `eval_dataset`: Validation dataset.
* `log_interval`: The interval for printing logs.
* `save_interval`: The interval for saving model parameters.
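- A combined usage sketch, mirroring the fine-tuning sketch earlier in this document; the optimizer choice and hyperparameters are assumptions:
- ```python
import paddle
import paddlehub as hub

# `model` and `flowers` come from the previous steps
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
trainer = hub.Trainer(model, optimizer, checkpoint_dir='img_classification_ckpt')
trainer.train(flowers, epochs=100, batch_size=32, eval_dataset=flowers, save_interval=1)
```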
- Model prediction
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
- ```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='resnet50_vd_imagenet_ssld', label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"], load_checkpoint='/PATH/TO/CHECKPOINT')
    result = model.predict(['/PATH/TO/IMAGE'])
```
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m resnet50_vd_imagenet_ssld
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
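- A minimal client sketch, assuming the same base64-JSON PaddleHub Serving convention; the classification results are returned in the JSON body:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/resnet50_vd_imagenet_ssld',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```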
- ```shell
$ hub run resnet_v2_50_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- The pneumonia CT analysis model (Pneumonia-CT-LKM-PP) can efficiently detect and outline lesions in a patient's CT images. Through post-processing code, the number and volume of lung lesions can be analyzed. This model has been fully trained on both high-resolution and low-resolution CT image data, so it can adapt to examination data collected by CT imaging equipment of different levels.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install Pneumonia_CT_LKM_PP==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- The pneumonia CT analysis model (Pneumonia-CT-LKM-PP) can efficiently detect and outline lesions in a patient's CT images. Through post-processing code, the number and volume of lung lesions can be analyzed. This model has been fully trained on both high-resolution and low-resolution CT image data, so it can adapt to examination data collected by CT imaging equipment of different levels. (This module is a submodule of Pneumonia_CT_LKM_PP.)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install Pneumonia_CT_LKM_PP_lung==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- Human Parsing is a fine-grained semantic segmentation task that aims to identify the components (for example, body parts and clothing) of a human image at the pixel level. The PaddleHub Module uses ResNet101 as the backbone network, and accepts input image sizes of 473x473x3.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install ace2p
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- ```python
import paddlehub as hub
import cv2
human_parser = hub.Module(name="ace2p")
result = human_parser.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')])
```
- ### 3、API
- ```python
def segmentation(images=None,
                 paths=None,
                 batch_size=1,
                 use_gpu=False,
                 output_dir='ace2p_output',
                 visualization=False):
```
- Prediction API, used for human parsing.
- **Parameter**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* output\_dir (str): Save path of output, default is 'ace2p_output'.
* visualization (bool): Whether to save the recognition results as picture files.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
- ```python
def save_inference_model(dirname,
                         model_filename=None,
                         params_filename=None,
                         combined=True)
```
- Save the model to the specified path.
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of human parsing.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ace2p
```
- The serving API is now deployed and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
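- A minimal client sketch, assuming the same base64-JSON PaddleHub Serving convention:
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
headers = {'Content-type': 'application/json'}
r = requests.post(url='http://127.0.0.1:8866/predict/ace2p',
                  headers=headers, data=json.dumps(data))
print(r.json()['results'])
```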
- ```shell
$ hub run deeplabv3p_xception65_humanseg --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- HumanSeg_lite is based on the ShuffleNetV2 network. The network size is only 541K. It is suitable for selfie portrait segmentation and can perform real-time segmentation on mobile devices.
- For more information, please refer to: [humanseg_lite](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.2/contrib/HumanSeg)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install humanseg_lite
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run humanseg_lite --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- Image segmentation and video segmentation example:
- ```python
import cv2
import paddlehub as hub

human_seg = hub.Module(name='humanseg_lite')
im = cv2.imread('/PATH/TO/IMAGE')
res = human_seg.segment(images=[im], visualization=True)
```
* images (list\[numpy.ndarray\]): Image data; ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the results as picture files.
* output\_dir (str): Save path of images, humanseg_lite_output by default.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
- ```python
def video_stream_segment(self,
                         frame_org,
                         frame_id,
                         prev_gray,
                         prev_cfd,
                         use_gpu=False):
```
- Prediction API, used to segment video portraits frame by frame.
- **Parameter**
* frame_org (numpy.ndarray): Single frame for prediction; ndarray.shape is in the format [H, W, C], BGR.
* frame_id (int): The number of the current frame.
* prev_gray (numpy.ndarray): Grayscale image of the previous network input.
* prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
- **Return**
* img_matting (numpy.ndarray): The result of portrait segmentation.
* cur_gray (numpy.ndarray): Grayscale image of the current network input.
* optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame.
- ```python
def video_segment(self,
                  video_path=None,
                  use_gpu=False,
                  save_dir='humanseg_lite_video_result'):
```
- Prediction API to produce video segmentation result.
- **Parameter**
* video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
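- A frame-by-frame usage sketch of `video_stream_segment`, assuming `prev_gray` and `prev_cfd` may be passed as None for the first frame:
- ```python
import cv2
import paddlehub as hub

human_seg = hub.Module(name='humanseg_lite')
cap = cv2.VideoCapture('/PATH/TO/VIDEO')
prev_gray, prev_cfd, frame_id = None, None, 0
while cap.isOpened():
    ret, frame = cap.read()
    if not ret:
        break
    frame_id += 1
    img_matting, prev_gray, prev_cfd = human_seg.video_stream_segment(
        frame_org=frame, frame_id=frame_id, prev_gray=prev_gray, prev_cfd=prev_cfd)
    # img_matting holds the portrait segmentation result for this frame
cap.release()
```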
- HumanSeg_mobile is based on the HRNet_w18_small_v1 network. The network size is only 5.8M. It is suitable for selfie portrait segmentation and can perform real-time segmentation on mobile devices.
- For more information, please refer to: [humanseg_mobile](https://github.com/PaddlePaddle/PaddleSeg/tree/release/2.2/contrib/HumanSeg)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install humanseg_mobile
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- ```shell
$ hub run humanseg_mobile --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- Image segmentation and video segmentation example:
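- A minimal sketch mirroring the humanseg_lite example above:
- ```python
import cv2
import paddlehub as hub

human_seg = hub.Module(name='humanseg_mobile')
im = cv2.imread('/PATH/TO/IMAGE')
res = human_seg.segment(images=[im], visualization=True)
```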
* images (list\[numpy.ndarray\]): Image data; ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the results as picture files.
* output\_dir (str): Save path of images, humanseg_mobile_output by default.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
- ```python
def video_stream_segment(self,
                         frame_org,
                         frame_id,
                         prev_gray,
                         prev_cfd,
                         use_gpu=False):
```
- Prediction API, used to segment video portraits frame by frame.
- **Parameter**
* frame_org (numpy.ndarray): Single frame for prediction; ndarray.shape is in the format [H, W, C], BGR.
* frame_id (int): The number of the current frame.
* prev_gray (numpy.ndarray): Grayscale image of the previous network input.
* prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from previous frame.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
- **Return**
* img_matting (numpy.ndarray): The result of portrait segmentation.
* cur_gray (numpy.ndarray): Grayscale image of the current network input.
* optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from current frame.
- ```python
def video_segment(self,
                  video_path=None,
                  use_gpu=False,
                  save_dir='humanseg_mobile_video_result'):
```
- Prediction API to produce video segmentation result.
- **Parameter**
* video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
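- A one-call usage sketch of `video_segment`:
- ```python
import paddlehub as hub

human_seg = hub.Module(name='humanseg_mobile')
# segment a local video file; with video_path=None the local camera is used
human_seg.video_segment(video_path='/PATH/TO/VIDEO', save_dir='humanseg_mobile_video_result')
```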
- SkyAR is based on [Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos](https://arxiv.org/abs/2010.11800). It mainly consists of three parts: sky matting network, motion estimation and image fusion.
- For more information, please refer to: [SkyAR](https://github.com/jiupinjia/SkyAR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install SkyAR
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)