How to get an optimized model for running on NPU
Created by: MARMOTatZJU
From the Paddle-Lite-Demo repository, I noticed that mobilenet_v1_for_cpu and mobilenet_v1_for_npu have different file sizes, so I suppose they are not the same model.
However, the description of model_optimize_tool does not mention this, so I am not sure whether the optimized model it generates targets the CPU or the NPU.
More concretely, I would like to know how to generate a model that runs on the NPU. Could you explain how to do this? Thanks in advance.
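For reference, my current guess is that the target hardware is selected through a flag such as `--valid_targets`, roughly like the sketch below. The flag name, value order, and model paths here are assumptions on my part, not something I found confirmed in the documentation:

```shell
# Assumed invocation -- I am NOT certain these flags produce an NPU model.
# --valid_targets is my guess at how the backend is chosen;
# the model paths are placeholders for my own files.
./model_optimize_tool \
    --model_dir=./mobilenet_v1 \
    --optimize_out_type=naive_buffer \
    --optimize_out=./mobilenet_v1_opt \
    --valid_targets=npu,arm
```

If this is the right mechanism, it would help to know which `valid_targets` value (and ordering) is required for the NPU, and whether any extra build options are needed when compiling the tool itself.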