- 12 Apr, 2022 (1 commit)
Committed by JingZhuangzhuang
* add python share_data interface
* Update inference_api.cc
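Since this entry only names the new interface, here is a minimal, hedged usage sketch of how a zero-copy data-sharing call typically slots into the Paddle Inference Python workflow. The method name `share_external_data`, the model paths, and the input shape are assumptions for illustration, not taken from the commit itself.

```python
# Hedged sketch: assumes the new Python share-data interface is exposed as
# share_external_data() on the input tensor handle, and that a model exists
# at the placeholder paths below.
import numpy as np
import paddle.inference as paddle_infer

config = paddle_infer.Config("model.pdmodel", "model.pdiparams")  # placeholder paths
predictor = paddle_infer.create_predictor(config)

# Bind input data by sharing it with the predictor instead of copying it
# in via copy_from_cpu().
input_name = predictor.get_input_names()[0]
input_handle = predictor.get_input_handle(input_name)
fake_input = np.ones((1, 3, 224, 224), dtype=np.float32)  # placeholder shape
input_handle.share_external_data(fake_input)  # assumed name of the new interface

predictor.run()

output_name = predictor.get_output_names()[0]
output = predictor.get_output_handle(output_name).copy_to_cpu()
print(output.shape)
```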
- 20 Oct, 2021 (1 commit)
Committed by Steffy-zxf
Add Tokenizer-related functionality for Transformer models so that the training and prediction processes stay consistent.
* support a text string as an input Tensor
* support the "VOCAB" unordered_map<wstring, int> as an input Tensor for token lookup
* add a Tokenizer for BERT that performs end-to-end, text-string-to-wordpiece tokenization: it first applies basic tokenization, followed by wordpiece tokenization
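To make the two-stage scheme described in this entry concrete, below is a small, self-contained sketch of basic tokenization followed by greedy longest-match-first wordpiece tokenization. The toy vocabulary and helper names are illustrative only and do not reflect Paddle's actual tokenizer op or its API.

```python
# Illustrative sketch of the two-stage scheme: basic tokenization
# (lowercasing + whitespace/punctuation split) followed by greedy
# longest-match-first wordpiece tokenization against a toy vocab.
import re

def basic_tokenize(text):
    # Lowercase and split into word and punctuation tokens.
    return re.findall(r"\w+|[^\w\s]", text.lower())

def wordpiece_tokenize(token, vocab, unk="[UNK]"):
    # Greedily peel off the longest prefix found in the vocab,
    # prefixing continuation pieces with "##".
    pieces, start = [], 0
    while start < len(token):
        end, cur = len(token), None
        while start < end:
            piece = token[start:end]
            if start > 0:
                piece = "##" + piece
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            return [unk]  # no piece matched: emit the unknown token
        pieces.append(cur)
        start = end
    return pieces

vocab = {"un", "##aff", "##able", "hello", ",", "!"}
for tok in basic_tokenize("Hello, unaffable!"):
    print(tok, "->", wordpiece_tokenize(tok, vocab))
# e.g. unaffable -> ['un', '##aff', '##able']
```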
- 19 Oct, 2021 (1 commit)
Committed by Wilber
* update
* fix ut error
* update ut
- 07 Sep, 2020 (1 commit)
Committed by Wilber