        program=main_program,  # make sure this can be changed during iteration
        reader=dataset,  # make sure this can be changed during iteration
        filelist=filelist,  # this can be changed during iteration
        thread=thread_num,  # make sure this can be changed during iteration
        fetch=[acc])  # fetch can be done with python, but the scope should be exposed
    print("accuracy %f" % acc_val)
    executor.save_model(infer_prog, "epoch%d.model" % i)
# todo:
# inference to be added, should load up a program and a global scope
# thread_num = len(filelist)
thread_num = 1
acc_val = async_executor.run(
    fluid.default_main_program(),  # make sure this can be changed during iteration
    dataset,  # make sure this can be changed during iteration
    filelist,  # this can be changed during iteration
    thread_num,  # make sure this can be changed during iteration
    [acc])  # fetch can be done with python, but the scope should be exposed
for val in acc_val:
    print("accuracy %f" % val)
```
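The inference path flagged in the todo above would, per the comment, load a saved program into a dedicated global scope. A rough sketch of that idea with the existing Fluid API is below; it assumes the model directory `epoch0.model` was written in `fluid.io.save_inference_model` format (rather than by the proposed `executor.save_model`), and the input name and shape are hypothetical:

```python
import numpy as np
import paddle.fluid as fluid

place = fluid.CPUPlace()
exe = fluid.Executor(place)
scope = fluid.core.Scope()  # dedicated global scope for inference

with fluid.scope_guard(scope):
    # Load the inference program and its feed/fetch targets into `scope`
    [infer_prog, feed_names, fetch_targets] = fluid.io.load_inference_model(
        dirname="epoch0.model", executor=exe)
    # Hypothetical input batch; shape must match the saved feed variable
    some_input = np.random.random((1, 13)).astype('float32')
    results = exe.run(infer_prog,
                      feed={feed_names[0]: some_input},
                      fetch_list=fetch_targets)
```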
## Difference between async_executor and other executors
async_executor is mainly designed for CPU training scenarios where data throughput is high and the computation part of training is not as intensive as in GPU-trained models such as ResNet-50. Since data throughput capability is very important for async_executor, we have to design very fast data IO modules to handle very large scale data reading. Another key difference is that memory is not a bottleneck in CPU training scenarios, given the 128GB or 256GB of RAM in a modern server.
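For contrast, the conventional synchronous executor feeds data batch by batch from Python, so the Python-side reader and the per-batch feed/fetch round trips become the throughput bottleneck that async_executor's in-C++ data IO avoids. A minimal sketch under the standard Fluid `Executor` API follows; the toy network, `train_reader`, and the two-epoch loop are hypothetical stand-ins, not part of this design:

```python
import paddle.fluid as fluid

# Toy network; in practice these would be the data/label/cost variables
# of the real main_program.
data = fluid.layers.data(name='data', shape=[13], dtype='float32')
label = fluid.layers.data(name='label', shape=[1], dtype='float32')
pred = fluid.layers.fc(input=data, size=1)
avg_cost = fluid.layers.mean(
    fluid.layers.square_error_cost(input=pred, label=label))

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

feeder = fluid.DataFeeder(place=place, feed_list=[data, label])
for pass_id in range(2):
    # train_reader is a hypothetical Python batch reader; every batch
    # crosses the Python/C++ boundary before the executor can run.
    for batch in train_reader():
        loss_val, = exe.run(fluid.default_main_program(),
                            feed=feeder.feed(batch),
                            fetch_list=[avg_cost])
```

In this style each `exe.run` call processes one fed batch, whereas `async_executor.run` takes a whole filelist and keeps multiple threads reading and training entirely inside C++.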