diff --git a/README_zh.md b/README_zh.md
index 4023775e885e5d498e9624fcc3ca5dcb6b18798e..e1f68891bd32f9098442979204271b07952309fd 100644
--- a/README_zh.md
+++ b/README_zh.md
@@ -240,7 +240,7 @@
 CUDA_VISIBLE_DEVICES=2 python run.py
 CUDA_VISIBLE_DEVICES=2,3 python run.py
 ```
 
-在多GPU模式下,PaddlePALM会自动将每批数据分配到可用的卡上。例如,如果`batch_size`设置为64,并且有4个GPU可以用于PaddlePALM,那么每个GPU中的batch_size实际上是64/4=16。因此,**当使用多个GPU时,您需要确保设置batch_size可以整除卡片的数量**。
+在多GPU模式下,PaddlePALM会自动将每个batch数据分配到可用的GPU上。例如,如果`batch_size`设置为64,并且有4个GPU可以用于PaddlePALM,那么每个GPU中的batch_size实际上是64/4=16。因此,**当使用多个GPU时,您需要确保batch_size可以被暴露给PALM的GPU数量整除**。
 
 ## 许可证书
diff --git a/examples/classification/README.md b/examples/classification/README.md
index 4ac05170078c858a2399e9659cd1145e76920b93..e479697e754f27846c7e101f88716d89b2ac375d 100644
--- a/examples/classification/README.md
+++ b/examples/classification/README.md
@@ -41,9 +41,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
diff --git a/examples/matching/README.md b/examples/matching/README.md
index aecb97f405353db0efdee234c023dc33a11de9c7..43b3383cb55ddb88234b61cc3a02e6f4ca19ed3f 100644
--- a/examples/matching/README.md
+++ b/examples/matching/README.md
@@ -49,9 +49,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
diff --git a/examples/mrc/README.md b/examples/mrc/README.md
index 6d01a3563d9398dfd32fe18280da6d83edbc9a4f..e22cc83c0414f1feab9c114790a4be8c49bb0964 100644
--- a/examples/mrc/README.md
+++ b/examples/mrc/README.md
@@ -54,9 +54,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
@@ -84,7 +86,16 @@ After the run, you can view the saved models in the `outputs/` folder and the pr
 
 ### Step 3: Evaluate
 
-Once you have the prediction, you can run the evaluation script to evaluate the model:
+#### Library Dependencies
+Before the evaluation, you need to install `nltk` and download the `punkt` tokenizer for nltk:
+
+```shell
+pip install nltk
+python -m nltk.downloader punkt
+```
+
+#### Evaluate
+You can run the evaluation script to evaluate the model:
 
 ```shell
 python evaluate.py
diff --git a/examples/multi-task/README.md b/examples/multi-task/README.md
index 63038ab0fd5368a4c183b6e0a461d021e9a72e7a..f5405619c61cb65d3967033281b9f06bb6f1ef8f 100644
--- a/examples/multi-task/README.md
+++ b/examples/multi-task/README.md
@@ -57,9 +57,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
@@ -83,10 +85,12 @@ python predict-intent.py
 If you want to specify a specific gpu or use multiple gpus for predict, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python predict-slot.py
-CUDA_VISIBLE_DEVICES=0,1,2 python predict-intent.py
+CUDA_VISIBLE_DEVICES=0,1 python predict-slot.py
+CUDA_VISIBLE_DEVICES=0,1 python predict-intent.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 After the run, you can view the predictions in the `outputs/predict-slot` folder and `outputs/predict-intent` folder. Here are some examples of predictions:
 
 `atis_slot`:
diff --git a/examples/predict/README.md b/examples/predict/README.md
index 19743f09642f68f8dd0bb118d91d5e1812d6cc95..1717e5c710ed72f958d2b0bb41454343971c0060 100644
--- a/examples/predict/README.md
+++ b/examples/predict/README.md
@@ -10,9 +10,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for predict, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
diff --git a/examples/tagging/README.md b/examples/tagging/README.md
index 465e611ea1396f125b4547391a7a68e073a6930e..236b7323119f93785de753243e57880fea273b4a 100644
--- a/examples/tagging/README.md
+++ b/examples/tagging/README.md
@@ -43,9 +43,11 @@ python run.py
 If you want to specify a specific gpu or use multiple gpus for training, please use **`CUDA_VISIBLE_DEVICES`**, for example:
 
 ```shell
-CUDA_VISIBLE_DEVICES=0,1,2 python run.py
+CUDA_VISIBLE_DEVICES=0,1 python run.py
 ```
 
+Note: In multi-gpu mode, PaddlePALM will automatically split each batch onto the available cards. For example, if the `batch_size` is set to 64 and there are 4 cards visible to PaddlePALM, then the batch_size on each card is actually 64/4=16. If you want to change the `batch_size` or the number of gpus used in the example, **you need to ensure that the batch_size you set is divisible by the number of cards.**
+
 Some logs will be shown below:
 
 ```
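The note this diff adds to every example README is simple arithmetic: PaddlePALM splits the global `batch_size` evenly across the cards listed in `CUDA_VISIBLE_DEVICES`, so the per-card batch is `batch_size / num_cards`, and 64 is divisible by 2 but not by 3, which is presumably why the examples switch from `CUDA_VISIBLE_DEVICES=0,1,2` to `CUDA_VISIBLE_DEVICES=0,1`. Below is a minimal sketch of the pre-launch check the note describes; the `per_card_batch_size` helper is illustrative only, not part of PaddlePALM's API:

```python
def per_card_batch_size(batch_size, visible_devices):
    """Per-card batch size under even splitting, as described in the note.

    batch_size % num_cards must be 0, otherwise the batch cannot be
    divided evenly across the visible cards.
    """
    # Count the cards named in a CUDA_VISIBLE_DEVICES-style string, e.g. "0,1".
    num_cards = len([d for d in visible_devices.split(",") if d.strip()])
    if num_cards == 0:
        raise ValueError("no visible GPU; e.g. set CUDA_VISIBLE_DEVICES=0,1")
    if batch_size % num_cards != 0:
        raise ValueError("batch_size %d is not divisible by %d cards"
                         % (batch_size, num_cards))
    return batch_size // num_cards

print(per_card_batch_size(64, "0,1,2,3"))  # 16, the case from the note
print(per_card_batch_size(64, "0,1"))      # 32
# per_card_batch_size(64, "0,1,2") would raise, since 64 % 3 != 0 --
# consistent with the 0,1,2 -> 0,1 change made throughout this diff.
```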