Created by: nepeplwu
Support showing Paddle models: users can call `paddle.fluid.io.save_inference_model()` to save their model to `<SAVE_MODEL_PATH>`, and then run `visualdl --model_pb <SAVE_MODEL_PATH>` to load and visualize the model.
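
Below is a minimal sketch of the save step, assuming an old-style `paddle.fluid` program; the single-FC-layer network is only illustrative, and `<SAVE_MODEL_PATH>` should be replaced with a real directory.

```python
# Sketch: save a trivial Paddle (fluid) inference model so VisualDL can display it.
import paddle.fluid as fluid

# Illustrative network: one input variable and one fully-connected layer.
image = fluid.layers.data(name='image', shape=[784], dtype='float32')
fc = fluid.layers.fc(input=image, size=10, act='softmax')

place = fluid.CPUPlace()
exe = fluid.Executor(place)
exe.run(fluid.default_startup_program())

# Save the inference program and parameters to <SAVE_MODEL_PATH>.
fluid.io.save_inference_model(
    dirname='<SAVE_MODEL_PATH>',   # placeholder path; replace with your own directory
    feeded_var_names=['image'],    # names of the model's input variables
    target_vars=[fc],              # output variables of the inference graph
    executor=exe)
```

After saving, start the viewer with `visualdl --model_pb <SAVE_MODEL_PATH>`.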