model = hub.Module(name='ann_resnet50_cityscapes')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the ann_resnet50_cityscapes model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
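- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```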
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='ann_resnet50_cityscapes', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
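- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```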
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='ann_resnet50_cityscapes', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ann_resnet50_cityscapes
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
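- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ann_resnet50_cityscapes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```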
- ```python
model = hub.Module(name='ann_resnet50_voc')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the ann_resnet50_voc model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
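- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```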
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='ann_resnet50_voc', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
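- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```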
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='ann_resnet50_voc', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m ann_resnet50_voc
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
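- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/ann_resnet50_voc"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```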
- ```python
model = hub.Module(name='danet_resnet50_cityscapes')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the danet_resnet50_cityscapes model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
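- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```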
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='danet_resnet50_cityscapes', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
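- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```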
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='danet_resnet50_cityscapes', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m danet_resnet50_cityscapes
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
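- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/danet_resnet50_cityscapes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```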
- ```python
model = hub.Module(name='danet_resnet50_voc')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the danet_resnet50_voc model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
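- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```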
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='danet_resnet50_voc', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
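- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```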
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='danet_resnet50_voc', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m danet_resnet50_voc
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
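- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/danet_resnet50_voc"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```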
- ```python
model = hub.Module(name='isanet_resnet50_cityscapes')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the isanet_resnet50_cityscapes model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
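- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```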
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='isanet_resnet50_cityscapes', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
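- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```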
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='isanet_resnet50_cityscapes', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m isanet_resnet50_cityscapes
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
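- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/isanet_resnet50_cityscapes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```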
- ```python
model = hub.Module(name='isanet_resnet50_voc')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the isanet_resnet50_voc model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
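- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```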
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='isanet_resnet50_voc', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
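- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```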
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='isanet_resnet50_voc', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m isanet_resnet50_voc
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
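- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/isanet_resnet50_voc"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```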
- We will show how to use PaddleHub to fine-tune the pre-trained model and complete prediction.
- For more information, please refer to: [PSPNet](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf)
## II. Installation
- ### 1. Environmental Dependencies
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2. Installation
- ```shell
$ hub install pspnet_resnet50_cityscapes
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
## III. Module API Prediction
- ### 1. Prediction Code Example
- ```python
import cv2
import paddlehub as hub

model = hub.Module(name='pspnet_resnet50_cityscapes')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the pspnet_resnet50_cityscapes model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
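- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```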
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='pspnet_resnet50_cityscapes', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
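- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```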
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='pspnet_resnet50_cityscapes', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pspnet_resnet50_cityscapes
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
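- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pspnet_resnet50_cityscapes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```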
- We will show how to use PaddleHub to fine-tune the pre-trained model and complete prediction.
- For more information, please refer to: [PSPNet](https://openaccess.thecvf.com/content_cvpr_2017/papers/Zhao_Pyramid_Scene_Parsing_CVPR_2017_paper.pdf)
## II. Installation
- ### 1. Environmental Dependencies
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2. Installation
- ```shell
$ hub install pspnet_resnet50_voc
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
## III. Module API Prediction
- ### 1. Prediction Code Example
- ```python
import cv2
import paddlehub as hub

model = hub.Module(name='pspnet_resnet50_voc')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the pspnet_resnet50_voc model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
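- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```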
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='pspnet_resnet50_voc', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
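- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```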
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='pspnet_resnet50_voc', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m pspnet_resnet50_voc
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
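- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/pspnet_resnet50_voc"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```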
- We will show how to use PaddleHub to fine-tune the pre-trained model and complete prediction.
- For more information, please refer to: [STDC](https://arxiv.org/abs/2104.13188)
## II. Installation
- ### 1. Environmental Dependencies
- paddlepaddle >= 2.0.0
- paddlehub >= 2.0.0
- ### 2. Installation
- ```shell
$ hub install stdc1_seg_cityscapes
```
- In case of any problems during installation, please refer to: [Windows_Quickstart](../../../../docs/docs_en/get_start/windows_quickstart.md)
## III. Module API Prediction
- ### 1. Prediction Code Example
- ```python
import cv2
import paddlehub as hub

model = hub.Module(name='stdc1_seg_cityscapes')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the stdc1_seg_cityscapes model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
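- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```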
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='stdc1_seg_cityscapes', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
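- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```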
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='stdc1_seg_cityscapes', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m stdc1_seg_cityscapes
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
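- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/stdc1_seg_cityscapes"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```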
- The STDC architecture used by this module is described in: Fan, Mingyuan, et al. "Rethinking BiSeNet For Real-time Semantic Segmentation." (https://arxiv.org/abs/2104.13188)
- **Args**
* `num_classes` (int, optional): Number of target classes.
* `use_boundary_8` (bool): Whether to use the detail loss on the 1/8-scale feature map; according to the paper, it should be True for the best metric. Default: True. To use `use_boundary_2`/`use_boundary_4`/`use_boundary_16`, append the corresponding loss terms of `DetailAggregateLoss`; it should work properly.
* `use_conv_last` (bool, optional): Whether to use the backbone's last conv when determining ContextPath's `inplanes` variable. Default: False.
* `pretrained` (str, optional): The path or URL of the pretrained model. Default: None.
- ```python
model = hub.Module(name='stdc1_seg_voc')
img = cv2.imread("/PATH/TO/IMAGE")
result = model.predict(images=[img], visualization=True)
```
- ### 2. Fine-tune and Encapsulation
- After completing the installation of PaddlePaddle and PaddleHub, you can start fine-tuning the stdc1_seg_voc model on datasets such as OpticDiscSeg.
- Steps:
- Step1: Define the data preprocessing method
- ```python
from paddlehub.vision.segmentation_transforms import Compose, Resize, Normalize

# an example preprocessing pipeline: resize, then normalize
transform = Compose([Resize(target_size=(512, 512)), Normalize()])
```
- `segmentation_transforms`: The data augmentation module defines many data preprocessing methods. Users can replace these preprocessing methods according to their needs.
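- Step2: Download the dataset
- A minimal sketch of this step, assuming the `transform` pipeline from Step1 and PaddleHub's bundled `OpticDiscSeg` dataset; the notes below describe its options:
- ```python
from paddlehub.datasets import OpticDiscSeg

# build the training reader with the preprocessing pipeline defined in Step1
train_reader = OpticDiscSeg(transform, mode='train')
```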
* `mode`: Select the data mode, the options are `train`, `test`, `val`. Default is `train`.
* For dataset preparation, refer to [opticdiscseg.py](../../paddlehub/datasets/opticdiscseg.py). `hub.datasets.OpticDiscSeg()` will automatically download the dataset and decompress it to the `$HOME/.paddlehub/dataset` directory under the user directory.
- Step3: Load the pre-trained model
- ```python
import paddlehub as hub
model = hub.Module(name='stdc1_seg_voc', num_classes=2, pretrained=None)
```
- `name`: model name.
- `num_classes`: number of classes for the segmentation task.
- `pretrained`: path of a self-trained checkpoint to load; if it is None, the provided pre-trained parameters are loaded.
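- Step4: Fine-tune the model
- A minimal fine-tuning sketch using PaddleHub's `Trainer`; the learning rate, epoch count, and checkpoint directory name are example choices, not prescribed values:
- ```python
import paddle
from paddlehub.finetune.trainer import Trainer

# Adam over the parameters of the model loaded in Step3
optimizer = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
# checkpoints, including best_model, are written under checkpoint_dir
trainer = Trainer(model, optimizer, checkpoint_dir='test_ckpt_img_seg', use_gpu=True)
trainer.train(train_reader, epochs=10, batch_size=4, log_interval=10, save_interval=4)
```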
- When fine-tuning is completed, the model that performs best on the validation set is saved in the `${CHECKPOINT_DIR}/best_model` directory. We use this model to make predictions. The `predict.py` script is as follows:
```python
import paddle
import cv2
import paddlehub as hub
if __name__ == '__main__':
model = hub.Module(name='stdc1_seg_voc', pretrained='/PATH/TO/CHECKPOINT')
img = cv2.imread("/PATH/TO/IMAGE")
model.predict(images=[img], visualization=True)
```
- **Args**
* `images`: list of image paths or ndarray data in [H, W, C] format, BGR.
* `visualization`: Whether to save the segmentation results as image files.
* `save_path`: Save path of the result, default is 'seg_result'.
## IV. Server Deployment
- PaddleHub Serving can deploy an online service of image segmentation.
- ### Step 1: Start PaddleHub Serving
- Run the startup command:
- ```shell
$ hub serving start -m stdc1_seg_voc
```
- The service API is now deployed; the default port is 8866.
- **NOTE:** If a GPU is used for prediction, set the CUDA_VISIBLE_DEVICES environment variable before starting the service; otherwise it does not need to be set.
- ### Step 2: Send a prediction request
- With the server configured, use the following lines of code to send a prediction request and obtain the result:
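- A minimal client sketch, assuming the service runs on the default host and port and accepts base64-encoded images (the usual PaddleHub Serving convention):
- ```python
import base64
import json

import cv2
import requests

def cv2_to_base64(image):
    # encode a BGR ndarray as a base64 JPEG string
    data = cv2.imencode('.jpg', image)[1]
    return base64.b64encode(data.tobytes()).decode('utf8')

org_im = cv2.imread('/PATH/TO/IMAGE')
data = {'images': [cv2_to_base64(org_im)]}
headers = {"Content-type": "application/json"}
url = "http://127.0.0.1:8866/predict/stdc1_seg_voc"
r = requests.post(url=url, headers=headers, data=json.dumps(data))
print(r.json()["results"])
```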
- The STDC architecture used by this module is described in: Fan, Mingyuan, et al. "Rethinking BiSeNet For Real-time Semantic Segmentation." (https://arxiv.org/abs/2104.13188)
- **Args**
* `num_classes` (int, optional): Number of target classes.
* `use_boundary_8` (bool): Whether to use the detail loss on the 1/8-scale feature map; according to the paper, it should be True for the best metric. Default: True. To use `use_boundary_2`/`use_boundary_4`/`use_boundary_16`, append the corresponding loss terms of `DetailAggregateLoss`; it should work properly.
* `use_conv_last` (bool, optional): Whether to use the backbone's last conv when determining ContextPath's `inplanes` variable. Default: False.
* `pretrained` (str, optional): The path or URL of the pretrained model. Default: None.