Any plans for online model serving?
Created by: yingfeng
Model serving has been provided, as discussed in issues such as this one, but it is a naive solution for traditional deep learning: after the model finishes training, it is delivered to a dedicated server for prediction.
However, an important use case for Paddle is sparse learning for advertising and recommendation. In these scenarios the model can be so large that a distributed lookup table is inevitable. Additionally, the model is updated continuously, which means the parameters and weights must be replicated to the prediction servers all the time. Is there any plan to support online prediction for such business scenarios?
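To make the requirement concrete, here is a minimal sketch of the kind of delta replication this question is asking about. All names (`SparseParamServer`, `ServingReplica`, `pull_delta`) are hypothetical and not part of any Paddle API; the point is that a serving replica should be able to pull only the sparse rows that changed since its last sync, rather than re-shipping the whole model.

```python
class SparseParamServer:
    """Hypothetical trainer-side store: sparse embedding rows,
    each tagged with the global version at which it was last updated."""

    def __init__(self):
        self.rows = {}      # feature_id -> (version, weights)
        self.version = 0    # monotonically increasing update counter

    def update(self, feature_id, weights):
        self.version += 1
        self.rows[feature_id] = (self.version, weights)

    def pull_delta(self, since_version):
        # Return only rows updated after `since_version`, so replication
        # cost scales with the update rate, not the full model size.
        delta = {fid: (v, w) for fid, (v, w) in self.rows.items()
                 if v > since_version}
        return delta, self.version


class ServingReplica:
    """Hypothetical prediction-side cache that periodically syncs
    deltas from the trainer and serves lookups from local memory."""

    def __init__(self, server):
        self.server = server
        self.rows = {}
        self.synced_version = 0

    def sync(self):
        delta, latest = self.server.pull_delta(self.synced_version)
        for fid, (_, weights) in delta.items():
            self.rows[fid] = weights
        self.synced_version = latest
        return len(delta)   # number of rows replicated this round

    def lookup(self, feature_id):
        # A cold feature not yet replicated falls back to zeros.
        return self.rows.get(feature_id, [0.0])


server = SparseParamServer()
server.update("user:42", [0.1, 0.2])

replica = ServingReplica(server)
replica.sync()                      # replicates the one updated row
print(replica.lookup("user:42"))    # serves from the local cache

server.update("ad:7", [0.3])
replica.sync()                      # second sync ships only the new delta
print(replica.lookup("ad:7"))
```

In a real system the pull would go over RPC against shards of the distributed lookup table, but the versioned-delta idea is the core of keeping a continuously trained sparse model fresh on prediction servers.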