- After completing the installation of PaddlePaddle and PaddleHub, you can start using the msgnet model to fine-tune datasets such as [MiniCOCO](../../docs/reference/datasets.md#class-hubdatasetsMiniCOCO) by executing `python train.py`.
- `transforms`: The data augmentation module defines many data preprocessing methods; users can replace them according to their needs.
* `mode`: Select the dataset mode; the options are `train`, `test`, and `val`. The default is `train`.
- For dataset preparation, please refer to [minicoco.py](../../paddlehub/datasets/minicoco.py). `hub.datasets.MiniCOCO()` will automatically download the dataset from the network and decompress it to the `$HOME/.paddlehub/dataset` directory under the user's home directory (a short data-preparation sketch is shown below).
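- A minimal data-preparation sketch, assuming the `paddlehub.vision.transforms` API and that `hub.datasets.MiniCOCO` accepts a transform and a mode as described above (the resize target is an illustrative choice):
- ```python
import paddlehub as hub
import paddlehub.vision.transforms as T

# Step1: define the data preprocessing pipeline (users may replace these transforms).
transform = T.Compose([T.Resize((256, 256))])

# Step2: the dataset is downloaded and decompressed automatically on first use.
styledata = hub.datasets.MiniCOCO(transform, mode='train')
```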
- Step3: Load the pre-trained model
- ```python
model = hub.Module(name='msgnet', load_checkpoint=None)
```
* `name`: model name.
* `load_checkpoint`: Path of a self-trained model checkpoint to load; if it is None, the parameters provided with the pre-trained module are loaded (a training sketch combining the steps above is shown below).
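- To run Fine-tune end to end, a training step is still needed. The sketch below combines the steps above; the optimizer, learning rate, `checkpoint_dir` name, and epoch/batch settings are illustrative assumptions, not prescribed values:
- ```python
import paddle
import paddlehub as hub
import paddlehub.vision.transforms as T

if __name__ == '__main__':
    # Data preparation, as in the sketch above.
    transform = T.Compose([T.Resize((256, 256))])
    styledata = hub.datasets.MiniCOCO(transform, mode='train')

    # Load the pre-trained style transfer model.
    model = hub.Module(name='msgnet', load_checkpoint=None)

    # Optimizer, checkpoint directory, and schedule below are illustrative choices.
    optimizer = paddle.optimizer.Adam(learning_rate=1e-4, parameters=model.parameters())
    trainer = hub.Trainer(model, optimizer, checkpoint_dir='style_transfer_ckpt')
    trainer.train(styledata,
                  epochs=100,
                  batch_size=4,
                  eval_dataset=styledata,
                  log_interval=10,
                  save_interval=10)
```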
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='msgnet', load_checkpoint="/PATH/TO/CHECKPOINT")
result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path="/PATH/TO/SAVE/IMAGE")
```
- **Args**
* `origin`: Image path or ndarray data with format [H, W, C], BGR;
* `style`: Style image path;
* `visualization`: Whether to save the recognition results as picture files;
* `save_path`: Save path of the result, default is 'style_tranfer'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service for style transfer.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m msgnet
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
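- The sketch below follows the generic PaddleHub Serving pattern: images are base64-encoded and POSTed as JSON to `http://127.0.0.1:8866/predict/msgnet`. The payload layout and response format are assumptions here; please check the module's serving documentation and adjust accordingly.
- ```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode an OpenCV BGR image as a base64 JPEG string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


if __name__ == '__main__':
    # NOTE: the payload layout below (content image list plus style image) is an assumption.
    origin = cv2_to_base64(cv2.imread('/PATH/TO/ORIGIN/IMAGE'))
    style = cv2_to_base64(cv2.imread('/PATH/TO/STYLE/IMAGE'))
    data = {'images': [[origin], style]}
    headers = {'Content-type': 'application/json'}
    url = 'http://127.0.0.1:8866/predict/msgnet'
    r = requests.post(url=url, headers=headers, data=json.dumps(data))
    print(r.json())
```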
- ResNet-vd is a variant of ResNet that can be used for image classification and feature extraction. This module is trained on a Baidu self-built animal dataset and supports the classification and recognition of 7,978 animal species.
- For more information, please refer to [ResNet-vd](https://arxiv.org/pdf/1812.01187.pdf)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install resnet50_vd_animals
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
- ```shell
$ hub run resnet50_vd_animals --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
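- Besides the command line, the module can also be called from Python. A minimal sketch, assuming the module exposes the `classification` interface common to PaddleHub image-classification modules (check the module documentation for the exact signature):
- ```python
import cv2
import paddlehub as hub

if __name__ == '__main__':
    classifier = hub.Module(name="resnet50_vd_animals")
    # `classification` is assumed to accept decoded images (ndarray, BGR) or file paths.
    results = classifier.classification(images=[cv2.imread('/PATH/TO/IMAGE')])
    print(results)
```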
- ```shell
$ hub run resnet50_vd_imagenet_ssld --input_path "/PATH/TO/IMAGE" --top_k 5
```
- ### 2、Prediction Code Example
```python
import paddle
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='resnet50_vd_imagenet_ssld')
result = model.predict(['/PATH/TO/IMAGE'])
```
- ### 3、Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start using the resnet50_vd_imagenet_ssld model to fine-tune datasets such as [Flowers](../../docs/reference/datasets.md#class-hubdatasetsflowers) by executing `python train.py`.
- `transforms`: The data augmentation module defines many data preprocessing methods; users can replace them according to their needs.
* `mode`: Select the dataset mode; the options are `train`, `test`, and `val`. The default is `train`.
* `hub.datasets.Flowers()` will automatically download the dataset from the network and decompress it to the `$HOME/.paddlehub/dataset` directory under the user's home directory (a data-preparation sketch is shown below).
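- A minimal data-preparation sketch; the transform list below (resize, center crop, ImageNet normalization) is an illustrative assumption and can be replaced as needed:
- ```python
import paddlehub as hub
import paddlehub.vision.transforms as T

# Illustrative preprocessing: resize, crop to 224x224, and normalize with ImageNet statistics.
transforms = T.Compose(
    [T.Resize((256, 256)),
     T.CenterCrop(224),
     T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])],
    to_rgb=True)

flowers = hub.datasets.Flowers(transforms)                       # mode='train' by default
flowers_validate = hub.datasets.Flowers(transforms, mode='val')  # validation split
```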
- Step3: Load the pre-trained model
- ```python
model = hub.Module(name="resnet50_vd_imagenet_ssld", label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"])
```
* `name`: model name.
* `label_list`: Set the output classification categories. The default is the ImageNet 2012 categories.
- `Trainer` mainly controls the training of Fine-tune, including the following controllable parameters:
* `model`: The model to be optimized;
* `optimizer`: The optimizer to use;
* `use_vdl`: Whether to use VisualDL to visualize the training process;
* `checkpoint_dir`: The directory where model parameters are saved;
* `compare_metrics`: The metric used to select the best model;
- `trainer.train` mainly controls the specific training process, including the following controllable parameters (a combined training sketch is shown after this list):
* `train_dataset`: Training dataset;
* `epochs`: Number of training epochs;
* `batch_size`: Batch size;
* `num_workers`: Number of workers;
* `eval_dataset`: Validation dataset;
* `log_interval`: The interval for printing logs;
* `save_interval`: The interval for saving model parameters.
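- Putting the pieces together, a Fine-tune run might look like the sketch below; the optimizer, learning rate, checkpoint directory, and training schedule are illustrative assumptions:
- ```python
import paddle
import paddlehub as hub
import paddlehub.vision.transforms as T

if __name__ == '__main__':
    # Data preparation, as in the sketch above.
    transforms = T.Compose(
        [T.Resize((256, 256)),
         T.CenterCrop(224),
         T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])],
        to_rgb=True)
    flowers = hub.datasets.Flowers(transforms)
    flowers_validate = hub.datasets.Flowers(transforms, mode='val')

    # Load the pre-trained model with the target label list.
    model = hub.Module(name='resnet50_vd_imagenet_ssld',
                       label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"])

    # Illustrative optimizer and Trainer settings.
    optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
    trainer = hub.Trainer(model, optimizer, checkpoint_dir='img_classification_ckpt', use_vdl=True)
    trainer.train(flowers,
                  epochs=100,
                  batch_size=32,
                  eval_dataset=flowers_validate,
                  log_interval=10,
                  save_interval=1)
```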
- Model prediction
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
- ```python
import paddle
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='resnet50_vd_imagenet_ssld', label_list=["roses", "tulips", "daisy", "sunflowers", "dandelion"], load_checkpoint='/PATH/TO/CHECKPOINT')
result = model.predict(['/PATH/TO/IMAGE'])
```
## IV. Server Deployment
- PaddleHub Serving can deploy an online service for classification.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m resnet50_vd_imagenet_ssld
```
- The serving API is now deployed, and the default port number is 8866.
- **NOTE:** If GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise, it does not need to be set.
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
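- The sketch below follows the generic PaddleHub Serving pattern: the image is base64-encoded and POSTed as JSON to `http://127.0.0.1:8866/predict/resnet50_vd_imagenet_ssld`. The payload keys and response format are assumptions here; please check the module's serving documentation and adjust accordingly.
- ```python
import base64
import json

import cv2
import requests


def cv2_to_base64(image):
    # Encode an OpenCV BGR image as a base64 JPEG string.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')


if __name__ == '__main__':
    # NOTE: the 'images' key is the usual PaddleHub Serving convention, assumed here.
    data = {'images': [cv2_to_base64(cv2.imread('/PATH/TO/IMAGE'))]}
    headers = {'Content-type': 'application/json'}
    url = 'http://127.0.0.1:8866/predict/resnet50_vd_imagenet_ssld'
    r = requests.post(url=url, headers=headers, data=json.dumps(data))
    print(r.json())
```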
- ```shell
$ hub run resnet_v2_50_imagenet --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_ch/tutorial/cmd_usage.rst)
- The pneumonia CT analysis model (Pneumonia-CT-LKM-PP) can efficiently detect and outline lesions in a patient's CT images. With the accompanying post-processing code, the number, volume, and other characteristics of lung lesions can be analyzed. The model has been fully trained on both high-resolution and low-resolution CT image data, so it can adapt to examination data collected by CT imaging equipment of different grades.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install Pneumonia_CT_LKM_PP==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
- The pneumonia CT analysis model (Pneumonia-CT-LKM-PP) can efficiently detect and outline lesions in a patient's CT images. With the accompanying post-processing code, the number, volume, and other characteristics of lung lesions can be analyzed. The model has been fully trained on both high-resolution and low-resolution CT image data, so it can adapt to examination data collected by CT imaging equipment of different grades. (This module is a submodule of Pneumonia_CT_LKM_PP.)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install Pneumonia_CT_LKM_PP_lung==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
- Human parsing is a fine-grained semantic segmentation task that aims to identify the components of a human image (for example, body parts and clothing) at the pixel level. This PaddleHub Module uses ResNet101 as its backbone network and accepts an input image size of 473x473x3.
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install ace2p
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)
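- After installation, a hedged usage sketch (assuming the module exposes the `segmentation` interface typical of PaddleHub human-parsing/segmentation modules; check the module docs for the exact signature and arguments):
- ```python
import cv2
import paddlehub as hub

if __name__ == '__main__':
    human_parser = hub.Module(name="ace2p")
    # `segmentation` is assumed to accept decoded images (ndarray, BGR) or file paths;
    # visualization=True is assumed to write the parsing results to the output directory.
    results = human_parser.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')],
                                        visualization=True,
                                        output_dir='ace2p_output')
    print(results)
```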
- SkyAR is based on [Castle in the Sky: Dynamic Sky Replacement and Harmonization in Videos](https://arxiv.org/abs/2010.11800). It mainly consists of three parts: sky matting network, motion estimation and image fusion.
- For more information, please refer to: [SkyAR](https://github.com/jiupinjia/SkyAR)
## II. Installation
- ### 1、Environmental Dependence
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2、Installation
- ```shell
$ hub install SkyAR
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_ch/get_start/windows_quickstart.md)