Unverified commit 3effde37, authored by Xiaoyao Xi, committed by GitHub

Merge pull request #66 from wangxiao1021/api

update README
...@@ -240,7 +240,7 @@ CUDA_VISIBLE_DEVICES=2 python run.py
CUDA_VISIBLE_DEVICES=2,3 python run.py
```
In multi-GPU mode, PaddlePALM automatically distributes each batch of data across the available GPUs. For example, if `batch_size` is set to 64 and there are 4 GPUs available to PaddlePALM, the batch_size on each GPU is actually 64/4=16. Therefore, **when using multiple GPUs, you must ensure that the batch_size is divisible by the number of GPUs exposed to PALM.**
## License
......
...@@ -41,9 +41,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
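The divisibility rule above can be checked in the shell before launching training. This is a minimal sketch, not part of PaddlePALM itself; `BATCH_SIZE` is a placeholder that must match the `batch_size` in your own config:

```shell
# Sketch: verify that batch_size is divisible by the number of visible GPUs
CUDA_VISIBLE_DEVICES="0,1,2,3"   # the cards you plan to expose
BATCH_SIZE=64                    # placeholder: match the batch_size in your config
NUM_GPUS=$(echo "$CUDA_VISIBLE_DEVICES" | tr ',' '\n' | wc -l)
if [ $((BATCH_SIZE % NUM_GPUS)) -ne 0 ]; then
    echo "batch_size=$BATCH_SIZE is not divisible by $NUM_GPUS GPUs" >&2
    exit 1
fi
echo "per-GPU batch size: $((BATCH_SIZE / NUM_GPUS))"
```

With 4 cards exposed and `batch_size` 64, this reports a per-GPU batch size of 16, matching the example in the note.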
......
...@@ -49,9 +49,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
```
......
...@@ -54,9 +54,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
```
...@@ -84,7 +86,16 @@ After the run, you can view the saved models in the `outputs/` folder and the pr
### Step 3: Evaluate
#### Library Dependencies
Before the evaluation, you need to install `nltk` and download the `punkt` tokenizer for nltk:
```shell
pip install nltk
python -m nltk.downloader punkt
```
#### Evaluate
You can run the evaluation script to evaluate the model:
```shell
python evaluate.py
......
...@@ -57,9 +57,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
```
...@@ -83,10 +85,12 @@ python predict-intent.py
If you want to specify a particular GPU or use multiple GPUs for prediction, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python predict-slot.py
CUDA_VISIBLE_DEVICES=0,1 python predict-intent.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
After the run, you can view the predictions in the `outputs/predict-slot` and `outputs/predict-intent` folders. Here are some examples of predictions:
`atis_slot`:
......
...@@ -10,9 +10,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for prediction, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
......
...@@ -43,9 +43,11 @@ python run.py
If you want to specify a particular GPU or use multiple GPUs for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
```shell
CUDA_VISIBLE_DEVICES=0,1 python run.py
```
Note: in multi-GPU mode, PaddlePALM will automatically split each batch across the visible cards. For example, if `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of GPUs used in the example, **you need to ensure that the batch_size is divisible by the number of cards.**
Some logs will be shown below:
```
......