program=main_program, # make sure this can be changed during iteration
reader=dataset,       # make sure this can be changed during iteration
filelist=filelist,    # this can be changed during iteration
thread=thread_num,    # make sure this can be changed during iteration
fetch=[acc])          # how to define fetch, and what kind of things to return here
print("accuracy %f"%acc_val)
# do something on fetch list
executor.save_model(infer_prog,"epoch%d.model"%i)
# todo:
# inference to be added, should loadup a program and a global scope
```
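The inference path is left as a todo above. As a minimal sketch, under the assumption that a model directory written by `save_model` can be restored with the stock `fluid.io.load_inference_model` helper into a fresh scope, loading a program and a global scope for inference could look like the following; the directory name and the input shape are illustrative, not part of this design:

```
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)
inference_scope = fluid.core.Scope()  # a fresh global scope to hold the loaded parameters

with fluid.scope_guard(inference_scope):
    # load_inference_model restores the saved program and its persistable variables;
    # "epoch0.model" is the directory written by save_model above (name assumed)
    infer_prog, feed_names, fetch_targets = fluid.io.load_inference_model(
        dirname="epoch0.model", executor=exe)
    fake_input = np.zeros((1, 1), dtype="int64")  # placeholder; real shape/dtype are model-specific
    results = exe.run(infer_prog,
                      feed={feed_names[0]: fake_input},
                      fetch_list=fetch_targets)
```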
## Difference between async_executor and other executors
async_executor is mainly designed for CPU training scenarios where data throughput is high and the computation part of training is not intensive compared with GPU-trained models such as ResNet-50. Since data throughput is critical for async_executor, we have to design very fast data IO modules to handle very large-scale data reading. Another key difference is that memory is not a bottleneck in CPU training scenarios, given the 128GB or 256GB of RAM in a modern server.
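To make the throughput point concrete, the sketch below shows the thread-per-file reading pattern the design implies; the shard names and the line-counting worker are assumptions for illustration, not the actual C++ IO module:

```
from concurrent.futures import ThreadPoolExecutor

filelist = ["part-%05d" % i for i in range(10)]  # assumed shard naming

def consume(filename):
    # Each worker streams one shard end-to-end; with ample RAM the limiting
    # factor is IO, so shards are consumed in parallel, one thread per file.
    count = 0
    with open(filename) as f:
        for line in f:
            count += 1  # stand-in for parsing and feeding one example
    return count

with ThreadPoolExecutor(max_workers=len(filelist)) as pool:
    total = sum(pool.map(consume, filelist))
print("consumed %d examples" % total)
```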