1. 10 Mar 2022, 1 commit
    • Inference: add ONNXRuntime back-end (#39988) · 431afc39
      Committed by heliqi
      * add onnxruntime predictor
      
      * Add code comments
      
      * support link paddle2onnx onnxruntime
      
      * support onnxruntime with python
      
      * support onnxruntime with windows
      
      * paddle2onnx compile with windows
      
* support windows compile

* support windows compile with onnxruntime

* support windows compile with paddle2onnx

* support mac compile
      
      * compile with mac
      
      * add code comments
      
* fix reminder wording
      
      * code optimization
      
      * add test case
      
      * add inference demo_ci test case
      
      * fix compile paddle2onnx with no python
      
      * add inference demo_ci test case
      
      * add inference infer_ut test case
      
      * support c go api and test cases
      
* add coverage test case
      
* add capi test case
      431afc39
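The commit above wires an ONNXRuntime predictor into the inference config so that, when the back-end is enabled, execution is routed through ONNXRuntime (via a Paddle2ONNX model conversion) instead of the native engine. A minimal pure-Python sketch of that dispatch idea, with all class and method names hypothetical rather than Paddle's actual API:

```python
# Illustrative sketch (not Paddle's real code) of a config flag that selects
# the ONNXRuntime back-end at predictor-creation time. All names are made up.

class Config:
    def __init__(self, model_path):
        self.model_path = model_path
        self._use_onnxruntime = False

    def enable_onnxruntime(self):
        # The real back-end additionally requires a build with ONNXRuntime
        # and Paddle2ONNX support compiled in.
        self._use_onnxruntime = True


class NativePredictor:
    backend = "native"

    def run(self, x):
        return [v * 2 for v in x]  # stand-in for real inference


class OnnxRuntimePredictor:
    backend = "onnxruntime"

    def run(self, x):
        # Real code would convert the model with Paddle2ONNX and build an
        # onnxruntime.InferenceSession here; same toy math for illustration.
        return [v * 2 for v in x]


def create_predictor(config):
    # The factory inspects the flag and picks the execution engine.
    if config._use_onnxruntime:
        return OnnxRuntimePredictor()
    return NativePredictor()


config = Config("model.pdmodel")
config.enable_onnxruntime()
predictor = create_predictor(config)
print(predictor.backend)         # onnxruntime
print(predictor.run([1, 2, 3]))  # [2, 4, 6]
```

The design keeps the user-facing predictor interface unchanged, which is why the commit series could add C, Go, and Python bindings plus demo_ci/infer_ut test cases without altering existing caller code.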
  2. 13 Jan 2022, 1 commit
  3. 20 Oct 2021, 1 commit
    • Add FasterTokenizer Operator (#34491) · 3f2d6a3f
      Committed by Steffy-zxf
Add tokenizer-related functionality for the Transformer model so that text is processed consistently between training and prediction.
      
* support a raw text string as an input Tensor
* support a "VOCAB" unordered_map<wstring, int> as an input Tensor for token lookup
* tokenizer used for BERT: applies end-to-end tokenization, from raw text string to wordpieces
* it first applies basic tokenization, followed by wordpiece tokenization
      3f2d6a3f
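The two-stage pipeline the commit describes can be sketched in plain Python: basic tokenization (lowercasing and whitespace splitting) followed by greedy longest-match wordpiece tokenization against a vocabulary. The real operator works on string Tensors inside the graph; this standalone sketch only illustrates the algorithm:

```python
# Minimal wordpiece tokenization sketch: basic tokenize, then greedily match
# the longest vocabulary piece, prefixing non-initial pieces with "##".

def basic_tokenize(text):
    return text.lower().split()


def wordpiece_tokenize(word, vocab, unk="[UNK]"):
    pieces, start = [], 0
    while start < len(word):
        end, cur = len(word), None
        while start < end:
            piece = word[start:end]
            if start > 0:
                piece = "##" + piece  # continuation-piece marker
            if piece in vocab:
                cur = piece
                break
            end -= 1
        if cur is None:
            # BERT's convention: if any piece fails, the whole word is unknown.
            return [unk]
        pieces.append(cur)
        start = end
    return pieces


def tokenize(text, vocab):
    out = []
    for word in basic_tokenize(text):
        out.extend(wordpiece_tokenize(word, vocab))
    return out


vocab = {"un", "##aff", "##able", "hello"}
print(tokenize("Hello unaffable", vocab))
# ['hello', 'un', '##aff', '##able']
```

Fusing this into the graph as an operator is what keeps training-time and serving-time tokenization identical, since there is no separate preprocessing script to drift out of sync.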
  4. 14 Jul 2021, 1 commit
  5. 05 Mar 2021, 1 commit
  6. 03 Feb 2021, 1 commit
  7. 13 Jan 2021, 1 commit
  8. 04 Jan 2021, 1 commit
  9. 21 Dec 2020, 1 commit
  10. 07 Dec 2020, 1 commit
  11. 11 Nov 2020, 1 commit
  12. 28 Aug 2020, 1 commit
  13. 14 Jul 2020, 1 commit
  14. 02 Jul 2020, 1 commit
  15. 10 Jun 2020, 1 commit
  16. 05 Jun 2020, 1 commit
  17. 01 Jun 2020, 1 commit
    • support C++ inference shared library on windows (#24672) · 126d3d69
      Committed by silingtong123
      * add SetCommandLineOption
      
      * add the print_FLAGS function
      
      * remove the test demo
      
* modify the location of the macro
      
      * add the 'WITH_STATIC_LIB' option on windows
      
      * modify the macro of PD_INFER_DECL
      
* modify the function name
      
      * modify the unittest
      
      * modify the code style
      126d3d69
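Two of the bullets above mention a SetCommandLineOption entry point and a print-FLAGS helper, i.e. the gflags-style pattern of updating a named flag from a "--name=value" string and dumping the current settings. A hedged, self-contained sketch of that pattern; the names and behavior here are illustrative, not Paddle's actual implementation:

```python
# gflags-style helpers: update a flag from "--name=value", dump all flags.
# Flag names and defaults below are made up for illustration.

FLAGS = {"use_gpu": "false", "math_num_threads": "1"}


def set_command_line_option(option):
    """Apply one "--name=value" string; return True if the flag was known."""
    if not option.startswith("--") or "=" not in option:
        raise ValueError("expected --name=value, got %r" % option)
    name, value = option[2:].split("=", 1)
    if name in FLAGS:
        FLAGS[name] = value
        return True
    # Unknown flags are reported-but-skipped, as gflags does.
    return False


def print_flags():
    """Render the current flag settings, one per line, sorted by name."""
    return "\n".join("--%s=%s" % (k, v) for k, v in sorted(FLAGS.items()))


set_command_line_option("--use_gpu=true")
print(print_flags())
# --math_num_threads=1
# --use_gpu=true
```

Exposing a string-based setter like this matters for a shared library on Windows, where the consumer cannot link against the library's internal gflags symbols directly.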
  18. 05 Apr 2020, 1 commit
  19. 03 Apr 2020, 1 commit
  20. 04 Feb 2020, 1 commit
  21. 17 Sep 2019, 1 commit
  22. 26 Mar 2019, 1 commit
  23. 20 Mar 2019, 1 commit
    • cherry-pick from feature/anakin-engine: add data type for zero copy (#16313) · 4f4daa4b
      Committed by nhzlx
      1. refine anakin engine
      2. add data type for zero copy
      
Aligns the dev branch with the PaddlePaddle:feature/anakin-engine branch.
The cuDNN workspace change is not included for now, because the
feature/anakin-engine branch uses a hard-coded approach; a better
implementation will follow in subsequent submissions.
      
      test=develop
      4f4daa4b
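The "data type for zero copy" change above is about letting the engine read the caller's buffer in place, with an explicit dtype tag, instead of copying input bytes into engine-owned memory. A pure-Python illustration of that idea using a typed memoryview over the caller's buffer (the real engine does this with device pointers; the class and method names here are hypothetical):

```python
# Zero-copy input sketch: the tensor records a dtype and wraps the caller's
# buffer with a typed memoryview, so reads see the original memory directly.

import array

DTYPE_CODES = {"float32": "f", "int32": "i", "int64": "q"}


class ZeroCopyTensor:
    def __init__(self, name, dtype):
        if dtype not in DTYPE_CODES:
            raise TypeError("unsupported dtype: %s" % dtype)
        self.name, self.dtype, self._view = name, dtype, None

    def share_external_data(self, buf):
        # No copy is made: cast the raw bytes view to the tensor's dtype.
        # (memoryview.cast requires going through a byte format.)
        self._view = memoryview(buf).cast("B").cast(DTYPE_CODES[self.dtype])

    def tolist(self):
        return self._view.tolist()


data = array.array("f", [1.0, 2.0, 3.0])
t = ZeroCopyTensor("input0", "float32")
t.share_external_data(data)
data[0] = 9.0        # mutating the source is visible through the tensor,
print(t.tolist())    # proving no copy happened: [9.0, 2.0, 3.0]
```

Tagging the dtype explicitly is what the commit adds on top of plain zero copy: without it, the engine cannot reinterpret the shared bytes correctly for anything other than a single hard-coded element type.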
  24. 08 Mar 2019, 2 commits
  25. 07 Mar 2019, 1 commit
  26. 01 Mar 2019, 1 commit
  27. 21 Feb 2019, 1 commit
  28. 14 Feb 2019, 1 commit
  29. 29 Jan 2019, 1 commit
  30. 28 Jan 2019, 2 commits
  31. 18 Jan 2019, 1 commit
  32. 17 Jan 2019, 1 commit
  33. 09 Jan 2019, 1 commit
  34. 08 Jan 2019, 1 commit
  35. 23 Nov 2018, 2 commits
  36. 14 Nov 2018, 1 commit
  37. 24 Oct 2018, 1 commit