Commit b63088b9 authored by 文幕地方

rm kd

Parent 39e85f7c
@@ -12,9 +12,8 @@
 - [2.3. Training with a Different Backbone](#23-更换backbone-训练)
 - [2.4. Mixed Precision Training](#24-混合精度训练)
 - [2.5. Distributed Training](#25-分布式训练)
-- [2.6. Knowledge Distillation Training](#26-知识蒸馏训练)
-- [2.7. Other Training Environments](#27-其他训练环境)
-- [2.8 Model Fine-tuning](#28-模型微调)
+- [2.6. Other Training Environments](#26-其他训练环境)
+- [2.7. Model Fine-tuning](#27-模型微调)
 - [3. Model Evaluation and Prediction](#3-模型评估与预测)
 - [3.1. Metric Evaluation](#31-指标评估)
 - [3.2. Testing Table Structure Recognition Results](#32-测试表格结构识别效果)
@@ -204,11 +203,8 @@ python3 -m paddle.distributed.launch --ips="xx.xx.xx.xx,xx.xx.xx.xx" --gpus '0,1
 **Note:** (1) For multi-machine multi-GPU training, replace the ips value in the command above with the addresses of your machines, and the machines must be able to ping each other; (2) the command must be launched separately on each machine, and a machine's IP address can be looked up with `ifconfig`; (3) for more on the performance benefits of distributed training, see the [Distributed Training Tutorial](./distributed_training.md).

-## 2.6. Knowledge Distillation Training
-
-coming soon!
-
-## 2.7. Other Training Environments
+## 2.6. Other Training Environments

 - Windows GPU/CPU

 The Windows platform differs slightly from Linux:
@@ -221,7 +217,7 @@ Windows supports only `single-GPU` training and prediction; specify the GPU for training with `set CUD
 - Linux DCU

 Running on a DCU device requires setting the environment variable `export HIP_VISIBLE_DEVICES=0,1,2,3`; all other training, evaluation, and prediction commands are identical to Linux GPU.

-## 2.8 Model Fine-tuning
+## 2.7. Model Fine-tuning

 In practice, it is recommended to load the official pre-trained model and fine-tune it on your own dataset. For the fine-tuning method, see the [Model Fine-tuning Tutorial](./finetune.md).
...
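For context, the launch command truncated in the hunk header above is PaddlePaddle's standard multi-machine entry point. A minimal sketch of the full invocation follows; the training config path is an illustrative assumption, not taken from this commit.

```bash
# Multi-machine, multi-GPU launch (sketch). Replace the ips placeholders with
# real, mutually pingable addresses, and run the same command on every machine.
# The config path below is an assumed example, not part of this diff.
python3 -m paddle.distributed.launch \
    --ips="xx.xx.xx.xx,xx.xx.xx.xx" \
    --gpus '0,1,2,3' \
    tools/train.py \
    -c configs/table/table_mv3.yml
```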
@@ -12,9 +12,8 @@ This article provides a full-process guide for the PaddleOCR table recognition m
 - [2.3. Training with New Backbone](#23-training-with-new-backbone)
 - [2.4. Mixed Precision Training](#24-mixed-precision-training)
 - [2.5. Distributed Training](#25-distributed-training)
-- [2.6. Training with Knowledge Distillation](#26-training-with-knowledge-distillation)
-- [2.7. Training on other platform(Windows/macOS/Linux DCU)](#27-training-on-other-platformwindowsmacoslinux-dcu)
-- [2.8 Fine-tuning](#28-fine-tuning)
+- [2.6. Training on other platform(Windows/macOS/Linux DCU)](#26-training-on-other-platformwindowsmacoslinux-dcu)
+- [2.7. Fine-tuning](#27-fine-tuning)
 - [3. Evaluation and Test](#3-evaluation-and-test)
 - [3.1. Evaluation](#31-evaluation)
 - [3.2. Test table structure recognition effect](#32-test-table-structure-recognition-effect)
@@ -211,11 +210,7 @@ python3 -m paddle.distributed.launch --ips="xx.xx.xx.xx,xx.xx.xx.xx" --gpus '0,1
 **Note:** (1) When using multi-machine and multi-gpu training, you need to replace the ips value in the above command with the address of your machine, and the machines need to be able to ping each other. (2) Training needs to be launched separately on multiple machines. The command to view the ip address of the machine is `ifconfig`. (3) For more details about the distributed training speedup ratio, please refer to [Distributed Training Tutorial](./distributed_training_en.md).

-## 2.6. Training with Knowledge Distillation
-
-coming soon!
-
-## 2.7. Training on other platform(Windows/macOS/Linux DCU)
+## 2.6. Training on other platform(Windows/macOS/Linux DCU)

 - Windows GPU/CPU

 The Windows platform is slightly different from the Linux platform:
@@ -229,7 +224,7 @@ GPU mode is not supported, you need to set `use_gpu` to False in the configurati
 Running on a DCU device requires setting the environment variable `export HIP_VISIBLE_DEVICES=0,1,2,3`, and the rest of the training and evaluation prediction commands are exactly the same as the Linux GPU.

-## 2.8 Fine-tuning
+## 2.7. Fine-tuning

 In the actual use process, it is recommended to load the officially provided pre-training model and fine-tune it in your own data set. For the fine-tuning method of the table recognition model, please refer to: [Model fine-tuning tutorial](./finetune.md).
...
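Both files point readers at per-platform environment setup before reusing the same training commands. A hedged summary follows; the Windows variable value is an assumption (the doc line is truncated to `set CUD` in the hunk header above), and the fine-tuning paths are illustrative placeholders.

```bash
# Linux DCU (from the doc text above): expose the DCU devices, then reuse the
# Linux GPU commands unchanged.
export HIP_VISIBLE_DEVICES=0,1,2,3

# Windows (cmd.exe, single-GPU only) uses `set` instead of `export`; the doc's
# "set CUD..." line is truncated above, and selecting device 0 is an assumption:
#   set CUDA_VISIBLE_DEVICES=0

# Fine-tuning (new section 2.7): load official pre-trained weights, then train
# on your own dataset; both paths here are assumed examples.
python3 tools/train.py -c configs/table/table_mv3.yml \
    -o Global.pretrained_model=./pretrain_models/your_pretrained_model
```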