For more implementation details, see the following demos:

- [Tagging]()
- [SQuAD Machine Reading Comprehension]()
#### Set Saver
To save models/checkpoints and logs during training, just call the `trainer.set_saver` method. For more implementation details, see [this]().
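A minimal sketch of what this might look like in a training script is shown below. Only `trainer.set_saver` comes from the text above; the import alias, the task name, and the argument names `save_path` and `save_steps` are assumptions for illustration, so check the linked demo for the exact signature.

```python
import paddlepalm as palm

# Trainer built as in steps 1~5 above ('senti_cls' is a hypothetical task name).
trainer = palm.Trainer('senti_cls')

# Assumed arguments: a directory for checkpoints/logs and a saving interval
# (in training steps). The exact parameter names may differ from this sketch.
trainer.set_saver(save_path='./outputs/ckpt', save_steps=1000)

# Training then proceeds as usual; checkpoints and logs are written periodically.
```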
### Do Prediction
To run prediction/evaluation after a training stage, just create another set of reader, backbone and head instances with `phase='predict'` (repeating steps 1~4 above), then call the trainer's `predict` method (no need to create another trainer). For more implementation details, see [this]().
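The sketch below illustrates this flow for a hypothetical classification task. Only `phase='predict'` and the trainer's `predict` method come from the text above; the reader/backbone/head class names, the paths, and the graph-building and data-feeding calls are assumptions modeled on a typical setup, so follow the linked demo for the exact API.

```python
import paddlepalm as palm

# Hypothetical paths and hyper-parameters, for illustration only.
vocab_path = './pretrain/ernie/vocab.txt'
config = './pretrain/ernie/ernie_config.json'
predict_file = './data/test.tsv'
max_seqlen, num_classes, input_dim = 128, 2, 768

# Repeat steps 1~4 with phase='predict' to get prediction-mode components.
predict_reader = palm.reader.ClassifyReader(vocab_path, max_seqlen, phase='predict')
pred_backbone = palm.backbone.ERNIE.from_config(config, phase='predict')
pred_head = palm.head.Classify(num_classes, input_dim, phase='predict')

# Reuse the trainer from the training stage (no new trainer needed): build the
# predict graph, feed the prediction data, then predict. Some wiring between
# the reader and the backbone is omitted here; see the linked demo.
trainer.build_predict_forward(pred_backbone, pred_head)
predict_reader.load_data(predict_file, batch_size=32)
trainer.fit_reader(predict_reader, phase='predict')
trainer.predict(print_steps=50)
```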
### Multi-task Learning
To run in multi-task learning mode:
1. Repeatedly create the components (i.e., reader, backbone and head) for each task, following steps 1~5 above.
...
...
The save/load and predict operations of a `multi_head_trainer` are the same as those of a trainer.
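A rough sketch of these steps for two hypothetical tasks is given below. Only the notions of per-task components, empty per-task trainers and a `multi_head_trainer` come from the text above; the class name `MultiHeadTrainer`, the method names (`build_forward`, `fit_readers`, `load_pretrain`, `train`) and their arguments are assumptions, so rely on the demos listed below for the concrete API.

```python
import paddlepalm as palm

# Per-task components (readers, a shared backbone, per-task heads) are assumed
# to have been created as in steps 1~5; their construction is omitted here.
trainer_intent = palm.Trainer('intent')   # empty trainer for task 1 (hypothetical name)
trainer_slot = palm.Trainer('slot')       # empty trainer for task 2 (hypothetical name)

# Wrap the per-task trainers in a multi-head trainer (assumed class name).
multi_head_trainer = palm.MultiHeadTrainer([trainer_intent, trainer_slot])

# Assumed method names: build the shared forward graph, feed each task's reader,
# load the pretrained backbone, then train over all tasks.
multi_head_trainer.build_forward(backbone, [intent_head, slot_head])
multi_head_trainer.fit_readers([intent_reader, slot_reader])
multi_head_trainer.load_pretrain('./pretrain/ernie/params')
multi_head_trainer.train()
```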
For more implementation details with `multi_head_trainer`, see the following demos:

- [Joint training of dialogue intent recognition and slot filling for ATIS]()
- [Learning reading comprehension with an auxiliary masked language model for MRQA]() (not to be included in the initial release)