@@ -39,16 +39,16 @@ Why we use multiple queues for data reader? a experimental result page needs to
## Main Interface of Async Executor
We provide the RunFromFiles interface as the execution entry point for users. Every time a user calls RunFromFiles, a main_program must be provided, and it runs in the previously defined global scope. A list of file names and the corresponding Dataset must also be provided. Inside RunFromFiles, readers are created from the Dataset configuration, and the files are fed into the created readers.
```c++
  // when the number of files is less than the number of threads
  for (int i = 0; i < fetch_var_num; ++i) {
    // average the accumulated fetch values over the number of processed batches
    fetch_values_[i] = fetch_values_[i] / batch_cnt;
  }
}
```
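To make the calling convention concrete, here is a minimal user-side sketch, assuming a Python wrapper class `AsyncExecutor` whose `run` method mirrors the C++ RunFromFiles interface; the class name, the `DataFeedDesc` configuration, the file names, and the `fetch` argument are illustrative assumptions rather than the confirmed API.

```python
# User-side sketch of driving the executor over a list of files.
# Class/method/argument names below are assumptions for illustration only.
import paddle.fluid as fluid

place = fluid.CPUPlace()
startup_program = fluid.default_startup_program()
main_program = fluid.default_main_program()
# ... build the network into main_program before this point ...

fluid.Executor(place).run(startup_program)       # initialize parameters once

async_executor = fluid.AsyncExecutor(place)      # assumed Python wrapper
dataset = fluid.DataFeedDesc('data_feed.proto')  # assumed Dataset/reader config
filelist = ['train_data/part-0', 'train_data/part-1']

# Runs main_program in the global scope; one reader per thread is created
# from the Dataset configuration and the files are fed into those readers.
async_executor.run(main_program, dataset, filelist,
                   thread_num=4, fetch=['mean_cost'])
```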
## How to print variable information during execution
Inside async_executor, no information is printed during execution. Variables can be fetched through an execution of async_executor, and the fetched variables can then be printed through Python. Since several files of instances are trained within async_executor, the fetched variables are not exact. In this version of the design, we only fetch the variables of the last iteration of each thread and average the fetched values by batch_size * thread_num.
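As a sketch of the Python-side fetch-and-print flow, continuing the illustrative names from the sketch above and assuming the wrapper's run method returns the fetched values (the averaging by batch_size * thread_num happens inside the executor, not in user code):

```python
# Hypothetical fetch-and-print flow; names and the return value of run()
# are assumptions for illustration only.
fetch_var_names = ['mean_cost']

# The executor itself prints nothing; it is assumed here to return the
# last-iteration values of each thread, averaged by batch_size * thread_num.
fetched = async_executor.run(main_program, dataset, filelist,
                             thread_num=4, fetch=fetch_var_names)

for name, value in zip(fetch_var_names, fetched):
    print('fetched {}: {}'.format(name, value))
```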
## How to save models
Models can be saved between executions of async_executor through the io.save method.
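For instance, a training loop could alternate async_executor runs with checkpointing. A minimal sketch, continuing the illustrative names above and assuming fluid.io.save_persistables is the io.save method meant here:

```python
import paddle.fluid as fluid

exe = fluid.Executor(fluid.CPUPlace())  # a plain executor is assumed for saving

for epoch in range(10):
    # One pass over the file list with the async executor (names assumed above).
    async_executor.run(main_program, dataset, filelist,
                       thread_num=4, fetch=['mean_cost'])
    # Persist parameters between executions of async_executor.
    fluid.io.save_persistables(exe, dirname='model/epoch_%d' % epoch,
                               main_program=main_program)
```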
...
...
@@ -144,11 +178,6 @@ Models can be saved between execution of async_executor through io.save method.