1. 17 Aug 2022 · 1 commit
    • [CodeStyle][NPU] use np.testing.assert_allclose instead of self.assertTrue(np.allclose(...)) (part 1) (#44988) · 2de0d676
      Committed by Nyakku Shigure
      
      * autofix
      
      * try resolve precision issues
      
      * revert some changes
      
      * clean some `err_msg`
      
      * 0.0001 -> 1e-4
      
      * update commented assert code
      
      * try to fix some shape errors
      
      * `numpy` -> `np`
      
      * empty commit, trigger kunlun ci, test=kunlun
      
      * empty commit, retrigger kunlun ci, test=kunlun
      
      * empty commit, trigger kunlun ci, try fix npu memcpy_h2d, test=kunlun
      
      * try fix npu import error, test=kunlun
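      The migration above swaps a bare boolean check for numpy's dedicated assertion, which prints the mismatching elements and the maximum absolute/relative error on failure. A minimal sketch (the arrays and tolerance here are illustrative, not taken from the PR):

          import numpy as np

          actual = np.array([1.0, 2.00004])
          expected = np.array([1.0, 2.0])

          # Before: a bare boolean check; on failure the message says only
          # "False is not true", with no hint of where or how far off.
          # self.assertTrue(np.allclose(actual, expected, rtol=1e-4))

          # After: on mismatch this raises AssertionError with the maximum
          # absolute and relative errors and the elements that differ.
          np.testing.assert_allclose(actual, expected, rtol=1e-4)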
  2. 13 Jun 2022 · 1 commit
  3. 05 Jun 2022 · 1 commit
    • 【code format check upgrade】 step2:yapf (#42944) · a072fca8
      Committed by Sing_chan
* use yapf to format all Python files
      
* exclude two unittest files from yapf because they rely on writing and reading files, and formatting would break them
      
* disable diff_py_file because too many diffed files caused the subsequent command to fail
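      The commit above applies yapf across the whole tree; a rough per-file sketch using yapf's Python API (the sample string and pep8 style are assumptions, not the repo's actual configuration):

          from yapf.yapflib.yapf_api import FormatCode

          # FormatCode returns (formatted_source, changed); changed is True
          # when yapf had to rewrite anything in the input.
          source = "x = {  'a':37,'b':42,\n'c':927}\n"
          formatted, changed = FormatCode(source, style_config='pep8')
          print(formatted)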
  4. 17 May 2022 · 1 commit
  5. 24 Apr 2022 · 1 commit
  6. 25 Mar 2022 · 1 commit
    • Refactor Dygraph Flags (#40786) · 3085d5e4
      Committed by Jiabin Yang
      * refactor eager flags
      
      * fix flags error when we switch from eager to dygraph
      
      * fix ci problem
      
      * fix ci
      
      * fix ci
      
      * merge develop and fix code style
      
      * merge develop and fix code style
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * fix op test error
      
      * merge develop
  7. 22 Dec 2021 · 1 commit
  8. 20 Oct 2021 · 1 commit
    • Add FasterTokenizer Operator (#34491) · 3f2d6a3f
      Committed by Steffy-zxf
      Add tokenizer-related functionality for the Transformer model so that the training and prediction pipelines are consistent.
      
      * support the text string as an input Tensor
* support the "VOCAB" unordered_map<wstring, int> as an input Tensor to look up tokens
* Tokenizer used for BERT. This tokenizer applies end-to-end tokenization, from raw text string to wordpiece tokens.
      * It first applies basic tokenization, followed by wordpiece tokenization.
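      The two-stage pipeline described above (basic tokenization, then wordpiece) boils down to a greedy longest-match-first lookup against the vocab. A minimal sketch; the function, vocab, and "##" continuation prefix below follow the standard BERT wordpiece scheme, not the operator's actual API:

          def wordpiece_tokenize(word, vocab, unk_token="[UNK]"):
              # Greedy longest-match-first: repeatedly take the longest
              # prefix of the remaining characters that is in the vocab;
              # continuation pieces carry the conventional "##" prefix.
              tokens, start = [], 0
              while start < len(word):
                  end = len(word)
                  piece = None
                  while start < end:
                      candidate = word[start:end]
                      if start > 0:
                          candidate = "##" + candidate
                      if candidate in vocab:
                          piece = candidate
                          break
                      end -= 1
                  if piece is None:
                      return [unk_token]  # nothing matched: word is unknown
                  tokens.append(piece)
                  start = end
              return tokens

          vocab = {"un", "##aff", "##able"}
          print(wordpiece_tokenize("unaffable", vocab))  # ['un', '##aff', '##able']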