1. 26 Mar 2020 (3 commits)
  2. 24 Mar 2020 (1 commit)
  3. 02 Mar 2020 (1 commit)
  4. 22 Feb 2020 (2 commits)
  5. 21 Feb 2020 (1 commit)
    • optimizer_v2: Improve error when called in cross-replica context · 5e15d37d
      Committed by Håkon Sandsmark
      When calling `Optimizer.apply_gradients()` in a cross-replica distribution
      context (with a non-default distribution strategy),
      `distribute_ctx.get_replica_context()` returns None, so it would fail with
      the error
      
          [...]/optimizer_v2.py", line 448, in apply_gradients
              return distribute_ctx.get_replica_context().merge_call(
          AttributeError: 'NoneType' object has no attribute 'merge_call'
      
      This commit changes the error to a `RuntimeError` with a more descriptive
      message (inspired by the error message in the v1 optimizer) that guides
      the user toward a fix: either call the `_distributed_apply()` function
      instead, or use `tf.distribute.Strategy.experimental_run_v2`.
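The improved check can be sketched in plain Python (this is an illustrative stand-in, not the TensorFlow source; `get_replica_context` and `merge_fn` here play the roles of `distribute_ctx.get_replica_context()` and the `merge_call` body):

```python
def apply_gradients_checked(get_replica_context, merge_fn):
    """Raise a descriptive RuntimeError in cross-replica context instead of
    letting None.merge_call fail with an AttributeError."""
    replica_context = get_replica_context()
    if replica_context is None:
        # Cross-replica context: no replica context object exists, so tell
        # the user how to proceed rather than crashing opaquely.
        raise RuntimeError(
            "`apply_gradients()` cannot be called in cross-replica context. "
            "Use `tf.distribute.Strategy.experimental_run_v2` to enter a "
            "replica context, or call `_distributed_apply()` instead.")
    # Replica context: proceed as before.
    return replica_context.merge_call(merge_fn)
```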
  6. 20 Feb 2020 (1 commit)
    • Add aggregation to OptimizerV2.apply_gradients · d317cb0b
      Committed by Ran Chen
      This option allows post-processing of all reduced gradients without inheriting from the optimizer.
      
      PiperOrigin-RevId: 296118658
      Change-Id: Ifb6884ec981b06eb70fe5ee9126ab9ac013550e9
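The control flow this option enables can be sketched as follows (a plain-Python illustration, not the TensorFlow implementation; `all_reduce_mean` and `apply_fn` are hypothetical stand-ins for the strategy's cross-replica reduction and the per-variable update):

```python
def apply_gradients(grads_and_vars, apply_fn, all_reduce_mean,
                    experimental_aggregate_gradients=True):
    """Apply gradients, optionally letting the caller aggregate them first."""
    if experimental_aggregate_gradients:
        # Default path: the optimizer reduces per-replica gradients itself.
        grads_and_vars = [(all_reduce_mean(g), v) for g, v in grads_and_vars]
    # Otherwise the caller passes in already-reduced (and possibly
    # post-processed) gradients, with no need to subclass the optimizer.
    for grad, var in grads_and_vars:
        apply_fn(grad, var)
```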
  7. 14 Feb 2020 (1 commit)
  8. 12 Feb 2020 (2 commits)
  9. 11 Feb 2020 (2 commits)
  10. 06 Jan 2020 (1 commit)
  11. 07 Dec 2019 (1 commit)
  12. 05 Dec 2019 (1 commit)
  13. 02 Nov 2019 (1 commit)
    • Add complex support to optimizers · bf9c196f
      Committed by Gaurav Jain
      We do not support complex values with certain optimizers, such as Ftrl,
      FtrlV2, AdamWithAmsgrad, AdaMax, AddSign & PowerSign, since they may
      rely on operations, such as sqrt, that are not defined for complex values.
      
      Fixes #32774
      
      PiperOrigin-RevId: 277953548
      Change-Id: Ia075aa5c3f944de932d71b9741d626f7ebe5416f
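The kind of guard implied by this commit can be sketched in plain Python (illustrative only; the names and the shape of the check are assumptions, not the TensorFlow source):

```python
# Optimizers whose update rules need ops (e.g. sqrt) that are not defined
# for complex values, per the commit message above.
_COMPLEX_UNSUPPORTED = {"Ftrl", "FtrlV2", "AdamWithAmsgrad", "AdaMax",
                        "AddSign", "PowerSign"}

def check_dtype(optimizer_name, value):
    """Reject complex values for optimizers that cannot handle them."""
    if isinstance(value, complex) and optimizer_name in _COMPLEX_UNSUPPORTED:
        raise TypeError(f"{optimizer_name} does not support complex values.")
    return value
```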
  14. 03 Oct 2019 (1 commit)
  15. 19 Sep 2019 (1 commit)
  16. 10 Sep 2019 (1 commit)
  17. 21 Aug 2019 (1 commit)
  18. 13 Aug 2019 (1 commit)
  19. 09 Aug 2019 (2 commits)
    • Change "Do whatever you need" to "Process" · 1485fc58
      Committed by HarikrishnanBalagopal
    • Changed example showing gradient processing · 31eb0e01
      Committed by HarikrishnanBalagopal
      The original example processed the gradients twice:
      1st: grads_and_vars = zip(processed_grads, var_list)
      2nd: capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]
      
      The 2nd line is especially odd because it unnecessarily zips the gradients with var_list even though only the gradient part is being processed.
      
      Refactored the example so that a single line processes the gradients.
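The clearer single-pass shape can be sketched with plain-Python stand-ins (in the real Keras example the gradients come from a gradient tape and the pairs go to `optimizer.apply_gradients`; `my_capper` here is a hypothetical processing function):

```python
def my_capper(grad, limit=1.0):
    """Process one gradient, e.g. clip it to [-limit, limit]."""
    return max(-limit, min(limit, grad))

grads = [0.3, -2.5, 4.0]
var_list = ["w0", "w1", "w2"]

# One line processes the gradients; zipping with var_list happens exactly
# once, when pairing gradients with variables for apply_gradients.
processed_grads = [my_capper(g) for g in grads]
grads_and_vars = list(zip(processed_grads, var_list))
```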
  20. 31 Jul 2019 (1 commit)
  21. 23 Jul 2019 (1 commit)
  22. 18 Jul 2019 (1 commit)
    • Update keras v2 optimizers to reuse coefficients which are shared across all... · 498df5d8
      Committed by Taylor Robie
      Update keras v2 optimizers to reuse coefficients that are shared across all
      updates, which reduces the total number of ops created by between 5% (for
      simple optimizers such as SGD and Adagrad) and 25% (for complicated
      optimizers such as Adam and NAdam). Separate copies are made for each
      device and dtype.
      
      The effect of this change on run time is fairly minimal, since Grappler is
      expected to consolidate most of these ops; however, it does improve graph
      construction time.
      
      PiperOrigin-RevId: 258581998
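The per-(device, dtype) reuse idea can be sketched as a simple memoizing cache (an illustrative sketch, not the TensorFlow code; `prepare_fn` stands in for whatever builds the shared coefficients such as the learning rate and beta powers):

```python
def make_coefficient_cache(prepare_fn):
    """Build shared update coefficients once per (device, dtype) key and
    reuse them for every variable update on that device/dtype."""
    cache = {}

    def get(device, dtype):
        key = (device, dtype)
        if key not in cache:
            # Created only once per key instead of once per variable.
            cache[key] = prepare_fn(device, dtype)
        return cache[key]

    return get
```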
  23. 05 Jul 2019 (1 commit)
  24. 13 Jun 2019 (2 commits)
  25. 11 Jun 2019 (1 commit)
  26. 10 Jun 2019 (1 commit)
  27. 04 Jun 2019 (1 commit)
    • Keras models and layers saving and reviving code. Implements go/tf-model-serialization. · eff4ae82
      Committed by Katherine Wu
      To save and revive a model:
      1. Save the model using tf.saved_model.save.
      2. Call load_from_save_model_v2.
      
      This restores various metadata about Keras models and layers, as well as their call and loss functions.
      
      Changes to object serialization:
      - Adds private fields for tracking the object's identifier and metadata.
      - Adds _list_extra_dependencies_for_serialization, which allows objects to save extra
        dependencies when serialized to SavedModel.
      - The object graph view maintains a serialization cache object that is passed to each object when serializing functions/extra dependencies.
      
      PiperOrigin-RevId: 251386039
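The "extra dependencies plus shared cache" mechanism can be sketched roughly as follows (a plain-Python illustration of the idea only; the class and method bodies here are assumptions, not the TensorFlow implementation, though the hook name matches the commit message):

```python
class Serializable:
    def _list_extra_dependencies_for_serialization(self, cache):
        """Objects override this to report extra children to serialize."""
        return {}  # name -> dependency

def serialize(obj, cache=None):
    """Walk the object graph, threading one shared cache through the whole
    traversal so each object is serialized exactly once."""
    if cache is None:
        cache = {}  # serialization cache for the entire object graph
    if id(obj) in cache:
        return cache[id(obj)]  # already serialized: reuse the entry
    entry = {"type": type(obj).__name__, "deps": {}}
    cache[id(obj)] = entry
    for name, dep in obj._list_extra_dependencies_for_serialization(cache).items():
        entry["deps"][name] = serialize(dep, cache)
    return entry
```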
  28. 22 May 2019 (1 commit)
  29. 09 May 2019 (1 commit)
  30. 02 May 2019 (1 commit)
  31. 30 Apr 2019 (1 commit)
  32. 25 Apr 2019 (1 commit)
  33. 17 Apr 2019 (1 commit)
    • Improve loss scaling with distribution strategy. · 677a1487
      Committed by Pavithra Vijay
      1. Remove all scaling from tf.losses and tf.keras.losses.
      2. Add appropriate scaling in Keras compile/fit for all types of losses and optimizers.
      3. For backward compatibility with custom estimators, detect the case of estimator + distribution strategy + optimizer v1, and scale in optimizer.compute_gradients in that case (same as the 1.13 behavior). Optimizer v2 never does scaling, estimator or not.
      
      PiperOrigin-RevId: 243909950
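The scaling rule being centralized can be illustrated with a small arithmetic sketch (illustrative only; the function name is hypothetical): with a distribution strategy, each replica sums its per-example losses over its local batch, and the correct global loss averages over the global batch size rather than a per-replica one.

```python
def global_mean_loss(per_replica_loss_sums, per_replica_batch_sizes):
    """Average per-replica loss sums over the global batch size."""
    global_batch = sum(per_replica_batch_sizes)
    return sum(per_replica_loss_sums) / global_batch
```

Dividing each replica's mean loss by the number of replicas gives the same result only when batches are evenly split, which is why the scaling belongs in one place.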