program=main_program,  # can be replaced between iterations
reader=dataset,        # can be replaced between iterations
filelist=filelist,     # can be replaced between iterations
thread=thread_num,     # number of worker threads; can be changed between iterations
fetch=[acc])           # TODO: define how fetch targets are specified and what is returned
print("accuracy %f" % acc_val)
executor.save_model(infer_prog, "epoch%d.model" % i)
# TODO:
# inference to be added; it should load up a program and a global scope
```
## Difference between async_executor and other executors
async_executor is mainly designed for CPU training scenarios where data throughput is high and the computation part of training is not intensive compared with GPU-trained models such as ResNet-50. Since data-throughput capability is critical for async_executor, we have to design very fast data-IO modules to handle large-scale data reading. Another key difference is that memory is not a bottleneck in CPU training scenarios, given the 128 GB or 256 GB of RAM in a modern server.
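To illustrate the kind of high-throughput, multi-threaded file reading described above, here is a minimal generic sketch. The names (`read_shard`, `threaded_reader`) and the queue-based pipeline are illustrative assumptions, not part of the Paddle API:

```python
import queue
import threading

def read_shard(path, out_queue):
    # Worker: stream one file shard line by line into a shared queue.
    with open(path) as f:
        for line in f:
            out_queue.put(line.rstrip("\n"))

def threaded_reader(filelist):
    # One reader thread per shard; a bounded queue applies back-pressure
    # so fast readers cannot run far ahead of the consumer.
    q = queue.Queue(maxsize=10000)
    threads = [threading.Thread(target=read_shard, args=(p, q))
               for p in filelist]
    for t in threads:
        t.start()
    alive = len(threads)
    # Drain until every reader has finished and the queue is empty.
    while alive or not q.empty():
        try:
            yield q.get(timeout=0.1)
        except queue.Empty:
            alive = sum(t.is_alive() for t in threads)
```

The bounded queue keeps memory use predictable even with many shards, which matches the assumption that CPU servers have ample but not unlimited RAM.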