* (3) Download `JetPack` and flash the system image by following the `Preparing a Jetson Developer Kit for Use` chapter of the [NVIDIA Jetson Linux Developer Guide](https://docs.nvidia.com/jetson/l4t/index.html).
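After flashing, a quick sanity check is to read the L4T release recorded on the device; this is a minimal sketch and assumes a standard L4T-based image:

```bash
# Print the L4T (Jetson Linux) release installed on the board
head -n 1 /etc/nv_tegra_release
```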
## Download or Compile the `Paddle` Inference Library
This document uses the `Paddle` inference library pre-built for `JetPack4.3`. Please select the `Paddle` inference library matching your hardware from [Install and Compile the Linux Inference Library](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/inference/build_and_install_lib_cn.html).
If you need to compile the `Paddle` library yourself on the `Jetson` platform, please refer to the `NVIDIA Jetson嵌入式硬件预测库源码编译` (building the inference library from source for NVIDIA Jetson embedded hardware) section of [Install and Compile the Linux Inference Library](https://www.paddlepaddle.org.cn/documentation/docs/zh/advanced_guide/inference_deployment/inference/build_and_install_lib_cn.html).
The PaddlePaddle C++ inference library provides different pre-built packages for different `CPU` and `CUDA` versions; please download the one that matches your environment from the [C++ inference library download list](https://www.paddlepaddle.org.cn/documentation/docs/zh/guides/05_inference_deployment/inference/build_and_install_lib_cn.html#linux).
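As a rough sketch of downloading and unpacking the pre-built library (`<paddle_inference_url>` is a placeholder for the package you pick from the download list, and the extracted directory name varies between releases):

```bash
# Download the pre-built inference library chosen from the download list above
wget <paddle_inference_url> -O paddle_inference.tgz
tar -xzf paddle_inference.tgz

# The package ships a version.txt recording the CUDA/cuDNN/TensorRT versions it was built with
# (the top-level directory may be named paddle_inference or fluid_inference depending on the release)
cat paddle_inference/version.txt
```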
For more installation methods such as conda or docker installation, please refer to the instructions in the [installation document](https://www.paddlepaddle.org.cn/install/quick).

Please make sure that your PaddlePaddle is installed successfully and that the version is not lower than the required version. Use the following commands to verify:

```
# To check PaddlePaddle installation in your Python interpreter
>>> import paddle.fluid as fluid
>>> fluid.install_check.run_check()

# To check PaddlePaddle version
python -c "import paddle; print(paddle.__version__)"
```

After the test is passed, a message confirming that PaddlePaddle is installed successfully will be printed.
PaddleDetection includes support for [COCO](http://cocodataset.org) and [Pascal VOC](http://host.robots.ox.ac.uk/pascal/VOC/) by default; please follow these instructions to set up the datasets.
**Create symlinks for local datasets:**
The default dataset paths in the config files are `dataset/coco` and `dataset/voc`. If the
datasets are already available on disk, you can simply create symlinks to them:
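A minimal sketch, assuming the datasets already live at local paths of your choosing (the bracketed paths are placeholders):

```bash
# Link existing local copies of the datasets into the paths expected by the configs
ln -sf <path/to/coco> <path/to/PaddleDetection>/dataset/coco
ln -sf <path/to/voc> <path/to/PaddleDetection>/dataset/voc
```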
To help users quickly produce a model and master the use of PaddleDetection, this tutorial fine-tunes a pre-trained detection model on a tiny dataset; a model with good performance can be trained in around 20 minutes. In actual business scenarios, it is recommended that users select a suitable model configuration file according to their needs.
- **Set GPU**

  **Note: before starting, you need to specify the GPU device as follows.**
```bash
export CUDA_VISIBLE_DEVICES=0
```
## Data Preparation
The dataset comes from [Kaggle](https://www.kaggle.com/mbkinaci/fruit-images-for-object-detection) and contains 240 images in the train set and 60 images in the test set, with three categories: apple, orange and banana. Download it [here](https://dataset.bj.bcebos.com/PaddleDetection_demo/fruit-detection.tar) and uncompress it after downloading; a script for data preparation is located at [download_fruit.py](../../dataset/fruit/download_fruit.py).
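A minimal way to run it, assuming you are in the PaddleDetection root directory:

```bash
# Download and extract the fruit dataset via the provided script
python dataset/fruit/download_fruit.py
```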
Use `yolov3_mobilenet_v1` pre-trained on the COCO dataset to fine-tune the model, as sketched below.
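A rough training sketch; the config name `configs/yolov3_mobilenet_v1_fruit.yml` is an assumption, so substitute the fruit configuration file shipped with your PaddleDetection version:

```bash
# Fine-tune from the COCO-pretrained weights and evaluate during training
python tools/train.py -c configs/yolov3_mobilenet_v1_fruit.yml --eval
```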
## Prepare Dataset
The dataset comes from [Kaggle](https://www.kaggle.com/andrewmvd/road-sign-detection) and contains 877 images in 4 categories: crosswalk, speedlimit, stop and trafficlight.
The dataset is divided into a training set (701 images) and a test set (176 images); it can be fetched from this [download link](https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar).
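A minimal sketch for fetching the prepared dataset; the target directory `dataset/roadsign_voc` is an assumption based on PaddleDetection's usual dataset layout:

```bash
# Download and unpack the road sign dataset (VOC format)
mkdir -p dataset/roadsign_voc && cd dataset/roadsign_voc
wget https://paddlemodels.bj.bcebos.com/object_detection/roadsign_voc.tar
tar -xf roadsign_voc.tar
```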
Loss and mAP curves can be observed in real time with VisualDL by adding `--use_vdl=true` to the training command and setting the log save path with `--vdl_log_dir`.

**Note: VisualDL requires Python >= 3.5.**

Please install [VisualDL](https://github.com/PaddlePaddle/VisualDL) first.
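A minimal sketch of installing VisualDL and visualizing the training logs; the config path and log directory below are illustrative placeholders:

```bash
# Install VisualDL
python -m pip install visualdl

# Train with VisualDL logging enabled (config path and log directory are placeholders)
python tools/train.py -c configs/yolov3_mobilenet_v1_roadsign.yml \
    --use_vdl=true --vdl_log_dir=vdl_dir/scalar --eval

# Launch the VisualDL server and open the printed URL in a browser
visualdl --logdir vdl_dir/scalar
```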