- input (numpy.ndarray|str): Image data, numpy.ndarray or str. ndarray.shape is in the format [H, W, C], BGR.
- model_select (list\[str\]): Mode selection. \['Colorization'\] only colorizes the input image; \['SuperResolution'\] only increases the image resolution;
default is \['Colorization', 'SuperResolution'\].
- save_path (str): Save path, default is 'photo_restoration'.
- **Return**
- output (numpy.ndarray): Restoration result, ndarray.shape is in the format [H, W, C], BGR.
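- A minimal usage sketch matching the parameters above. The module name `photo_restoration` and method name `run_image` are assumptions inferred from this README's defaults; adjust them if they differ.
```python
import cv2
import paddlehub as hub

# Load the restoration module and run both colorization and super-resolution.
model = hub.Module(name='photo_restoration', visualization=True)
im = cv2.imread('/PATH/TO/IMAGE')  # BGR ndarray, shape [H, W, C]
output = model.run_image(input=im,
                         model_select=['Colorization', 'SuperResolution'],
                         save_path='photo_restoration')
print(output.shape)  # restored image, [H, W, C], BGR
```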
## IV. Server Deployment
...
@@ -111,7 +111,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
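- For example, a request sketch in Python (the route name `photo_restoration` and the payload schema follow the parameters documented above; treat them as assumptions):
```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # Encode a BGR ndarray as a base64 string for the JSON payload.
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': cv2_to_base64(org_im)}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/photo_restoration"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```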
- user_guided_colorization is a colorization model based on "Real-Time User-Guided Image Colorization with Learned Deep Priors". This model uses pre-supplied coloring blocks to color the gray image.
## II. Installation
...
@@ -40,8 +40,8 @@
$ hub install user_guided_colorization
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
$ hub run user_guided_colorization --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
...
@@ -69,6 +71,7 @@
- Steps:
- Step1: Define the data preprocessing method
- ```python
import paddlehub.vision.transforms as T
...
@@ -77,7 +80,7 @@
T.RGB2LAB()], to_rgb=True)
```
- `transforms`: The data augmentation module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs.
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* `hub.datasets.Canvas()`: The dataset will be automatically downloaded from the network and decompressed to the `$HOME/.paddlehub/dataset` directory.
- Step3: Load the pre-trained model
...
@@ -97,7 +100,7 @@
model = hub.Module(name='user_guided_colorization', load_checkpoint=None)
model.set_config(classification=True, prob=1)
```
* `name`: Model name.
* `load_checkpoint`: Whether to load the self-trained model; if None, the provided pre-trained parameters are loaded.
* `classification`: The model is trained in two modes. At the beginning, `classification` is set to True for shallow network training. In the later stage of training, set `classification` to False to train the output layer of the network.
* `prob`: The probability that a prior color block is not added to each input image; the default is 1, i.e. no prior color block is added. For example, when `prob` is set to 0.9, the probability that there are exactly two prior color blocks on a picture is (1-0.9)*(1-0.9)*0.9=0.009.
...
@@ -115,20 +118,20 @@
- `Trainer` mainly controls the training of Fine-tune, including the following controllable parameters:
* `model`: Optimized model.
* `optimizer`: Optimizer selection.
* `use_vdl`: Whether to use vdl to visualize the training process.
* `checkpoint_dir`: The storage address of the model parameters.
* `compare_metrics`: The measurement index of the optimal model.
- `trainer.train` mainly controls the specific training process (see the sketch after this list), including the following controllable parameters:
* `train_dataset`: Training dataset.
* `epochs`: Epochs of training process.
* `batch_size`: Batch size.
* `num_workers`: Number of workers.
* `eval_dataset`: Validation dataset.
* `log_interval`: The interval for printing logs.
* `save_interval`: The interval for saving model parameters.
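- A condensed fine-tuning sketch wiring the steps above together (hyperparameters and the checkpoint directory are illustrative, not the canonical training recipe):
```python
import paddle
import paddlehub as hub
import paddlehub.vision.transforms as T
from paddlehub.finetune.trainer import Trainer

if __name__ == '__main__':
    # Step1: data preprocessing (values are illustrative).
    transform = T.Compose([T.Resize((256, 256), interpolation='NEAREST'),
                           T.RandomPaddingCrop(crop_size=176),
                           T.RGB2LAB()], to_rgb=True)
    # Step2: the Canvas dataset is downloaded automatically on first use.
    color_set = hub.datasets.Canvas(transform=transform, mode='train')
    # Step3: load the pre-trained model in classification (shallow-training) mode.
    model = hub.Module(name='user_guided_colorization')
    model.set_config(classification=True, prob=1)
    # Step4: optimize and train.
    optimizer = paddle.optimizer.Adam(learning_rate=0.0001, parameters=model.parameters())
    trainer = Trainer(model, optimizer, checkpoint_dir='img_colorization_ckpt', use_vdl=True)
    trainer.train(color_set, epochs=201, batch_size=25, log_interval=10, save_interval=10)
```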
- Model prediction
...
@@ -167,8 +170,7 @@
- ### Step 2: Send a predictive request
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
import cv2
import paddlehub as hub
...
@@ -81,16 +82,16 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "dcscn_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result; save_path is '' if no image is saved.
* data (numpy.ndarray): Result of super resolution.
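- A call sketch for the API above. The module name `dcscn` and method name `reconstruct` are assumptions inferred from the default output directory; substitute the module this section documents.
```python
import cv2
import paddlehub as hub

sr = hub.Module(name='dcscn')  # assumed module name
result = sr.reconstruct(images=[cv2.imread('/PATH/TO/IMAGE')],  # BGR ndarray, [H, W, C]
                        use_gpu=False,
                        visualization=True,
                        output_dir='dcscn_output')
print(result[0]['save_path'], result[0]['data'].shape)
```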
```python
def save_inference_model(self,
...
@@ -105,8 +106,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
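- A minimal export sketch for `save_inference_model` (file names below are the documented defaults; the module name is illustrative):
```python
import paddlehub as hub

module = hub.Module(name='dcscn')  # illustrative module name
# Export the module as an inference model for deployment.
module.save_inference_model(dirname='inference_model',
                            model_filename='__model__',
                            params_filename='__params__',
                            combined=True)  # write all parameters into one file
```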
...
@@ -131,7 +132,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- falsr_a is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides a super resolution result with scale factor x2.
- For more information, please refer to: [falsr_a](https://github.com/xiaomi-automl/FALSR)
## II. Installation
...
@@ -42,8 +42,8 @@
$ hub install falsr_a
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
import cv2
import paddlehub as hub
...
@@ -82,10 +83,10 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "dcscn_output" by default.
- **Return**
...
@@ -134,7 +135,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- falsr_b is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides a super resolution result with scale factor x2.
- For more information, please refer to: [falsr_b](https://github.com/xiaomi-automl/FALSR)
...
@@ -42,8 +42,8 @@
$ hub install falsr_b
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -82,16 +83,16 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "dcscn_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result; save_path is '' if no image is saved.
* data (numpy.ndarray): Result of super resolution.
```python
def save_inference_model(self,
...
@@ -106,8 +107,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
...
@@ -134,7 +135,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- falsr_c is a lightweight super-resolution model based on "Accurate and Lightweight Super-Resolution with Neural Architecture Search". The model uses a multi-objective approach to deal with the over-segmentation problem, and uses an elastic search strategy based on a hybrid controller to improve the performance of the model. This model provides a super resolution result with scale factor x2.
- For more information, please refer to: [falsr_c](https://github.com/xiaomi-automl/FALSR)
...
@@ -42,8 +42,8 @@
$ hub install falsr_c
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -82,16 +83,16 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format \[H, W, C\], BGR.
* paths (list\[str\]): Image path.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the recognition results as picture files.
* output\_dir (str): Save path of images, "dcscn_output" by default.
- **Return**
* res (list\[dict\]): The list of model results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result; save_path is '' if no image is saved.
* data (numpy.ndarray): Result of super resolution.
```python
def save_inference_model(self,
...
@@ -106,8 +107,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
...
@@ -134,7 +135,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- realsr is a super resolution model for image and video based on "Toward Real-World Single Image Super-Resolution: A New Benchmark and A New Model". This model provides a super resolution result with scale factor x4.
- For more information, please refer to: [realsr](https://github.com/csjcai/RealSR)
...
@@ -47,8 +47,8 @@
$ hub install realsr
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -89,7 +92,7 @@
- **Parameters**
- data (list\[dict\]): Each element in the list is a dict and each field is:
- image (list\[str\]): Each element in the list is the path of the image to be converted.
- style (list\[str\]): Each element in the list is a string specifying the face attributes to be converted.
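- A prediction sketch for this API, assuming a face-attribute-editing module such as `stargan_celeba` whose `generate` method takes a data dict of the fields above:
```python
import paddlehub as hub

stargan = hub.Module(name="stargan_celeba")  # assumed module name
test_img_path = ["/PATH/TO/IMAGE"]
trans_attr = ["Blond_Hair"]  # one target attribute per image
input_dict = {"image": test_img_path, "style": trans_attr}
results = stargan.generate(data=input_dict)
print(results)
```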
- CycleGAN belongs to Generative Adversarial Networks (GANs). Unlike traditional GANs that can only generate pictures in one direction, CycleGAN can simultaneously complete the style transfer of two domains. The PaddleHub Module is trained on the Cityscapes dataset, and supports the conversion from real images to semantic segmentation results, and also supports conversion from semantic segmentation results to real images.
...
@@ -42,15 +41,15 @@
- paddlepaddle >= 1.4.0
- paddlehub >= 1.1.0
- ### 2、Installation
```shell
$ hub install cyclegan_cityscapes==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
$ hub run cyclegan_cityscapes --input_path "/PATH/TO/IMAGE"
```
- **Parameters**
- input_path: Image path.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -90,13 +91,13 @@
- **Parameters**
- data (list\[dict\]): Each element in the list is a dict and each field is:
- image (list\[str\]): Image path.
- **Return**
- res (list\[dict\]): The list of style transfer results, where each element is a dict and each field is:
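- A prediction sketch for the API above, using the `cyclegan_cityscapes` module documented in this section (the `generate` method name follows the convention of the other GAN modules here; treat it as an assumption):
```python
import paddlehub as hub

cyclegan = hub.Module(name="cyclegan_cityscapes")
test_img_path = ["/PATH/TO/IMAGE"]
input_dict = {"image": test_img_path}
results = cyclegan.generate(data=input_dict)  # list of style transfer results
print(results)
```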
- STGAN takes the original attribute and the target attribute as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained on the CelebA dataset and currently supports attributes of "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged".
## II. Installation
...
@@ -40,8 +40,8 @@
```shell
$ hub install stargan_celeba==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- style: Specify the attributes to be converted. The options are "Black_Hair", "Blond_Hair", "Brown_Hair", "Female", "Male", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- STGAN takes the difference between the original attribute and the target attribute as input, and proposes STUs (Selective transfer units) to select and modify features of the encoder. The PaddleHub Module is trained on the CelebA dataset and currently supports attributes of "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged".
## II. Installation
...
@@ -40,8 +40,8 @@
```shell
$ hub install stgan_celeba==1.0.0
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
- info: Attributes of the original image; gender ("Male" or "Female") must be filled in. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". For example, if the input picture is a girl with black hair, fill in "Female,Black_Hair".
- style: Specify the attributes to be converted. The options are "Bald", "Bangs", "Black_Hair", "Blond_Hair", "Brown_Hair", "Bushy_Eyebrows", "Eyeglasses", "Gender", "Mouth_Slightly_Open", "Mustache", "No_Beard", "Pale_Skin", "Aged". You can choose one of the options.
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -88,7 +89,7 @@
- **Parameters**
- data (list\[dict\]): Each element in the list is a dict and each field is:
- image (list\[str\]): Each element in the list is the path of the image to be converted.
- style (list\[str\]): Each element in the list is a string specifying the face attributes to be converted.
- info (list\[str\]): Represents the face attributes of the original image. Different attributes are separated by commas.
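- A prediction sketch combining the three fields above, assuming the `stgan_celeba` module's `generate` API:
```python
import paddlehub as hub

stgan = hub.Module(name="stgan_celeba")
test_img_path = ["/PATH/TO/IMAGE"]
org_info = ["Female,Black_Hair"]  # attributes of the original image; gender is required
trans_attr = ["Bangs"]            # the attribute to convert
input_dict = {"image": test_img_path, "style": trans_attr, "info": org_info}
results = stgan.generate(data=input_dict)
print(results)
```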
$ hub run msgnet --input_path "/PATH/TO/ORIGIN/IMAGE" --style_path "/PATH/TO/STYLE/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
import paddle
import paddlehub as hub

if __name__ == '__main__':
    model = hub.Module(name='msgnet')
    result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path="/PATH/TO/SAVE/IMAGE")
```
- ### 3、Fine-tune and Encapsulation
...
@@ -111,7 +111,7 @@ if __name__ == '__main__':
- Model prediction
- When Fine-tune is completed, the model with the best performance on the validation set will be saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import paddlehub as hub
...
@@ -120,10 +120,10 @@ if __name__ == '__main__':
result = model.predict(origin=["/PATH/TO/ORIGIN/IMAGE"], style="/PATH/TO/STYLE/IMAGE", visualization=True, save_path="/PATH/TO/SAVE/IMAGE")
```
- **Parameters**
* `origin`: Image path or ndarray data with format [H, W, C], BGR.
* `style`: Style image path.
* `visualization`: Whether to save the recognition results as picture files.
* `save_path`: Save path of the result, default is 'style_tranfer'.
...
@@ -148,7 +148,7 @@ if __name__ == '__main__':
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
$ hub run resnet50_vd_animals --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
...
@@ -142,7 +142,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- `transforms`: The data augmentation module defines lots of data preprocessing methods. Users can replace the data preprocessing methods according to their needs.
- Step2: Download the dataset
...
@@ -108,20 +108,20 @@
- `Trainer` mainly controls the training of Fine-tune, including the following controllable parameters:
* `model`: Optimized model.
* `optimizer`: Optimizer selection.
* `use_vdl`: Whether to use vdl to visualize the training process.
* `checkpoint_dir`: The storage address of the model parameters.
* `compare_metrics`: The measurement index of the optimal model.
- `trainer.train` mainly controls the specific training process, including the following controllable parameters:
* `train_dataset`: Training dataset.
* `epochs`: Epochs of training process.
* `batch_size`: Batch size.
* `num_workers`: Number of workers.
* `eval_dataset`: Validation dataset.
* `log_interval`: The interval for printing logs.
* `save_interval`: The interval for saving model parameters.
...
@@ -159,7 +159,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
```python
import paddlehub as hub
import cv2
...
@@ -72,9 +75,9 @@
result = human_parser.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')])
```
- ### 3、API
```python
def segmentation(images=None,
paths=None,
batch_size=1,
...
@@ -87,21 +90,21 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* output\_dir (str): Save path of output, default is 'ace2p_output'.
* visualization (bool): Whether to save the recognition results as picture files.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
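- A call sketch for the segmentation API above (paths are illustrative):
```python
import cv2
import paddlehub as hub

human_parser = hub.Module(name="ace2p")
result = human_parser.segmentation(images=[cv2.imread('/PATH/TO/IMAGE')],  # BGR ndarray input
                                   batch_size=1,
                                   use_gpu=False,
                                   visualization=True,   # also write results to output_dir
                                   output_dir='ace2p_output')
print(result[0]['save_path'])
```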
```python
def save_inference_model(dirname,
model_filename=None,
params_filename=None,
...
@@ -112,8 +115,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
...
@@ -125,7 +128,7 @@
- Run the startup command:
```shell
$ hub serving start -m ace2p
```
...
@@ -138,7 +141,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
hub run deeplabv3p_xception65_humanseg --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
hub run humanseg_lite --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- Image segmentation and video segmentation example:
```python
import cv2
import paddlehub as hub
...
@@ -67,7 +70,7 @@
```
- Video prediction example:
```python
import cv2
import numpy as np
import paddlehub as hub
...
@@ -99,7 +102,7 @@
- ### 3、API
```python
def segment(images=None,
paths=None,
batch_size=1,
...
@@ -112,20 +115,20 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the results as picture files.
* output\_dir (str): Save path of images, humanseg_lite_output by default.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
```python
def video_stream_segment(self,
frame_org,
frame_id,
...
@@ -133,26 +136,25 @@
prev_cfd,
use_gpu=False):
```
- Prediction API, used to segment video portraits frame by frame.
- **Parameters**
* frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR.
* frame_id (int): The number of the current frame.
* prev_gray (numpy.ndarray): Grayscale image of the previous network input.
* prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from the previous frame.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
- **Return**
* img_matting (numpy.ndarray): The result of portrait segmentation.
* cur_gray (numpy.ndarray): Grayscale image of the current network input.
* optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from the current frame.
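- A frame-by-frame sketch of the streaming API above, reading from the local camera (a sketch under the documented signature, using the `humanseg_lite` module; error handling omitted):
```python
import cv2
import paddlehub as hub

human_seg = hub.Module(name='humanseg_lite')
cap = cv2.VideoCapture(0)          # local camera
prev_gray, prev_cfd = None, None   # state carried across frames
frame_id = 0
while True:
    ret, frame = cap.read()
    if not ret:
        break
    # Feed the previous frame's outputs back in as prev_gray/prev_cfd.
    img_matting, prev_gray, prev_cfd = human_seg.video_stream_segment(
        frame_org=frame, frame_id=frame_id, prev_gray=prev_gray, prev_cfd=prev_cfd)
    frame_id += 1
    cv2.imshow('matting', img_matting)  # img_matting is the per-pixel portrait mask
    if cv2.waitKey(1) & 0xFF == ord('q'):
        break
cap.release()
cv2.destroyAllWindows()
```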
```python
def video_segment(self,
video_path=None,
use_gpu=False,
...
@@ -164,11 +166,11 @@
- **Parameters**
* video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
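- And a one-call sketch for whole-video segmentation via the API above (the save directory is illustrative):
```python
import paddlehub as hub

human_seg = hub.Module(name='humanseg_lite')
# Segment an entire video file; omit video_path to read from the local camera instead.
human_seg.video_segment(video_path='/PATH/TO/VIDEO', use_gpu=False, save_dir='humanseg_output')
```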
hub run humanseg_mobile --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- Image segmentation and video segmentation example:
```python
...
@@ -112,17 +115,17 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the results as picture files.
* output\_dir (str): Save path of images, humanseg_mobile_output by default.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
```python
...
@@ -138,17 +141,17 @@
- **Parameters**
* frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR.
* frame_id (int): The number of the current frame.
* prev_gray (numpy.ndarray): Grayscale image of the previous network input.
* prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from the previous frame.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
- **Return**
* img_matting (numpy.ndarray): The result of portrait segmentation.
* cur_gray (numpy.ndarray): Grayscale image of the current network input.
* optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from the current frame.
...
@@ -164,7 +167,7 @@
- **Parameters**
* video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* save\_dir (str): Save path of video.
...
@@ -181,8 +184,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
hub run humanseg_server --input_path "/PATH/TO/IMAGE"
```
- If you want to call the Hub module through the command line, please refer to: [PaddleHub Command Line Instruction](../../../../docs/docs_en/tutorial/cmd_usage.rst)
- ### 2、Prediction Code Example
- Image segmentation and video segmentation example:
```python
...
@@ -112,17 +113,17 @@
- **Parameters**
* images (list\[numpy.ndarray\]): Image data, ndarray.shape is in the format [H, W, C], BGR.
* paths (list\[str\]): Image path.
* batch\_size (int): Batch size.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* visualization (bool): Whether to save the results as picture files.
* output\_dir (str): Save path of images, humanseg_server_output by default.
- **Return**
* res (list\[dict\]): The list of recognition results, where each element is a dict and each field is:
* save\_path (str, optional): Save path of the result.
* data (numpy.ndarray): The result of portrait segmentation.
```python
...
@@ -138,17 +139,17 @@
- **Parameters**
* frame_org (numpy.ndarray): Single frame for prediction, ndarray.shape is in the format [H, W, C], BGR.
* frame_id (int): The number of the current frame.
* prev_gray (numpy.ndarray): Grayscale image of the previous network input.
* prev_cfd (numpy.ndarray): The fusion image from optical flow and the prediction result from the previous frame.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
- **Return**
* img_matting (numpy.ndarray): The result of portrait segmentation.
* cur_gray (numpy.ndarray): Grayscale image of the current network input.
* optflow_map (numpy.ndarray): The fusion image from optical flow and the prediction result from the current frame.
...
@@ -164,8 +165,8 @@
- **Parameters**
* video\_path (str): Video path for segmentation. If None, the video will be obtained from the local camera, and a window will display the online segmentation result.
* use\_gpu (bool): Use GPU or not. **Set the CUDA_VISIBLE_DEVICES environment variable first if you are using GPU.**
* save\_dir (str): Save path of video.
```python
...
@@ -181,8 +182,8 @@
- **Parameters**
* dirname: Save path.
* model\_filename: Model file name, default is \_\_model\_\_.
* params\_filename: Parameter file name, default is \_\_params\_\_ (only takes effect when `combined` is True).
* combined: Whether to save the parameters to a unified file.
...
@@ -195,7 +196,7 @@
- Run the startup command:
```shell
$ hub serving start -m humanseg_server
```
...
@@ -207,7 +208,7 @@
- With a configured server, use the following lines of code to send the prediction request and obtain the result:
```python
import requests
import json
import base64
...
@@ -245,7 +246,7 @@
- 1.1.0
Added video portrait segmentation interface
Added video stream portrait segmentation interface