1. 26 March 2020, 2 commits
  2. 13 March 2020, 1 commit
  3. 28 February 2020, 1 commit
    • Remove passing experimental_run_tf_function in most tests. · b7014702
      Committed by Reed Wanderman-Milne
      The experimental_run_tf_function parameter no longer has any effect.
      
      I didn't remove the functionality in testing_util.py and keras_parameterized.py that runs tests with experimental_run_tf_function set to both True and False. I will remove that functionality in a future change.
      
      PiperOrigin-RevId: 297674422
      Change-Id: I5b1e67f78b4c3b60242241fb4dc2018f0ace6013
  4. 22 February 2020, 2 commits
    • Makes Model compile run inside the captured distribution strategy, so that any... · ef5d540a
      Committed by A. Unique TensorFlower
      Makes Model compile run inside the captured distribution strategy, so that any metrics/optimizers created by compile are created in the distribution strategy scope (e.g. when deserializing strings that are the metric names).
      
      Also adds a correctness test that verifies the model successfully captures the distribution strategy.
      
      It also raises an error if there are metrics that are created outside the scope and not in compile.
      
      So, for example, this will help in the following case, because the optimizer and metrics get created by compile:
      with strategy.scope():
        model = ...
      model.compile(optimizer='sgd', metrics=['binary_accuracy'])
      
      And, it will raise an error in the following case because the metrics are created in a different distribution strategy scope than the model:
      with strategy.scope():
        model = ...
      model.compile(optimizer=tf.keras.optimizers.Blah(), metrics=[tf.keras.metrics.BinaryAccuracy()])
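
      For context, a minimal runnable sketch of the two cases (assuming a MirroredStrategy; the commit does not name a specific strategy):
      import tensorflow as tf

      strategy = tf.distribute.MirroredStrategy()

      # Case 1: compile() is called outside the scope, but it now runs inside the
      # captured strategy, so the optimizer/metrics it creates from strings land
      # in the right scope.
      with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
      model.compile(optimizer='sgd', loss='mse', metrics=['binary_accuracy'])

      # Case 2: the metric objects are instantiated outside the model's strategy
      # scope, which the new check reports as an error (per the description above).
      with strategy.scope():
        model2 = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
      model2.compile(optimizer=tf.keras.optimizers.SGD(),
                     loss='mse',
                     metrics=[tf.keras.metrics.BinaryAccuracy()])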
      
      PiperOrigin-RevId: 296553610
      Change-Id: I988a80c1863de732da45555c444fc5a237ecd425
    • Limit strategy combinations in test · 23a0ca34
      Committed by Håkon Sandsmark
      Some of the GPU strategies failed earlier with another error message.
  5. 21 February 2020, 1 commit
    • optimizer_v2: Improve error when called in cross-replica context · 5e15d37d
      Committed by Håkon Sandsmark
      When calling `Optimizer.apply_gradients()` in a cross-replica distribution
      context (with a non-default distribution strategy),
      `distribute_ctx.get_replica_context()` returns None, so it would fail with
      the error
      
          [...]/optimizer_v2.py", line 448, in apply_gradients
              return distribute_ctx.get_replica_context().merge_call(
          AttributeError: 'NoneType' object has no attribute 'merge_call'
      
      This commit changes the error to a `RuntimeError` with a more descriptive
      error message (inspired by the error message in the v1 optimizer) guiding
      the user how to fix the issue, by either calling the `_distributed_apply()`
      function instead or by using `tf.distribute.Strategy.experimental_run_v2`.
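
      For reference, a minimal sketch (not from the commit) of the second suggestion: invoke the training step per replica with tf.distribute.Strategy.experimental_run_v2 so that apply_gradients() runs in a replica context:
      import tensorflow as tf

      strategy = tf.distribute.MirroredStrategy()
      with strategy.scope():
        model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(4,))])
        optimizer = tf.keras.optimizers.SGD(0.1)

      def train_step(x, y):
        # Runs once per replica, so get_replica_context() is non-None here and
        # apply_gradients() can merge the updates across replicas.
        with tf.GradientTape() as tape:
          loss = tf.reduce_mean(tf.square(model(x) - y))
        grads = tape.gradient(loss, model.trainable_variables)
        optimizer.apply_gradients(zip(grads, model.trainable_variables))
        return loss

      # Calling apply_gradients() directly out here (cross-replica context) is what
      # now raises the descriptive RuntimeError instead of the AttributeError above.
      strategy.experimental_run_v2(train_step, args=(tf.ones((8, 4)), tf.ones((8, 1))))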
  6. 20 February 2020, 1 commit
    • Keras ideal fit and compile. · 10666c59
      Committed by Thomas O'Malley
      Kept all new abstractions private for now. In a few weeks, if we're
      comfortable that these abstractions are working and stable, we should expose
      many of them publicly.
      
      Capabilities added by this CL:
      
      (1) Easy to create a custom training step via overriding Model._train_step (see the sketch after this list)
      (2) Easy to create custom tf.function / DistStrat logic via overriding
      Model._make_train_function
      (3) Advanced users can override Model.compile and Model.fit
      (4) Full support for dicts, nested structures, etc with Subclassed Models.
      (5) "Power user" path (tf.data inputs) only modifies data in Model._train_step,
      where this behavior is easy to override and disable. This applies even to
      Keras's assumption that data is passed in (x, y, sample_weight) format.
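
      To illustrate point (1), a hypothetical sketch of a custom training step. It is written against the public train_step / compiled_loss API that this work later became; at the time of this CL the hook was still the private Model._train_step, whose exact signature may differ:
      import tensorflow as tf

      class CustomModel(tf.keras.Model):
        def __init__(self):
          super(CustomModel, self).__init__()
          self.dense = tf.keras.layers.Dense(1)

        def call(self, inputs):
          return self.dense(inputs)

        def train_step(self, data):
          # One training step: data arrives in (x, y) form, and the step returns
          # a dict of metric results that fit() forwards to callbacks.
          x, y = data
          with tf.GradientTape() as tape:
            y_pred = self(x, training=True)
            loss = self.compiled_loss(y, y_pred)
          grads = tape.gradient(loss, self.trainable_variables)
          self.optimizer.apply_gradients(zip(grads, self.trainable_variables))
          self.compiled_metrics.update_state(y, y_pred)
          return {m.name: m.result() for m in self.metrics}

      model = CustomModel()
      model.compile(optimizer='sgd', loss='mse', metrics=['mae'])
      model.fit(tf.ones((8, 4)), tf.ones((8, 1)), epochs=1, verbose=0)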
      
      Behavior changes:
      
      (1) "loss" passed to Callbacks is now stateful (like all other metrics in
      Callbacks). This greatly simplifies the training step logic and callback logic.
      (2) ProgbarLogger always uses steps. If steps is not available, the
      ProgbarLogger handles inferring the steps after the first epoch.
      (3) validation_batch_size added in `fit`, rather than inferring from generator.
      (4) Model.inputs, Model.outputs, Model.input_names, and Model.output_names are
      no longer populated for subclassed Models. Instead, "pseudo" output names are
      created for subclassed Models, which are only used for metrics names and
      SavedModel's signature.
      (5) NumPy floats are cast to backend.floatx(); other values are left
      unchanged (this is likely not a behavior change; we did something similar in our old
      version, but the logic was scattered across many places).
      
      PiperOrigin-RevId: 296090972
      Change-Id: Ia5ac833fd39085bddb016833bd338083d0dc5fc2
  7. 15 February 2020, 1 commit
  8. 25 January 2020, 1 commit
  9. 29 October 2019, 1 commit
    • Fix incorrect RNN gradients with TPUStrategy and mixed precision. · 159d5f6d
      Committed by Reed Wanderman-Milne
      The core issue is that if you call Variable.value() when a TPUStrategy is used, gradients with respect to that Variable are None (I'm not sure why). All the operator overloads in AutoCastVariable used Variable.value(), so they were broken with TPUStrategy. I fixed this by calling Variable.read_value() instead. This fixes any case where an AutoCastVariable operator is used, such as most RNNs.
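
      A toy sketch of the pattern described above (hypothetical class, not the actual AutoCastVariable source): operator overloads read through read_value() and then cast, instead of going through value():
      import tensorflow as tf

      class CastOnRead(object):
        """Illustrative wrapper: cast-on-read via read_value()."""

        def __init__(self, variable, dtype=tf.float16):
          self._variable = variable
          self._dtype = dtype

        def _read(self):
          # Before the fix the overloads used self._variable.value(), which produced
          # None gradients under TPUStrategy; read_value() does not have that problem.
          return tf.cast(self._variable.read_value(), self._dtype)

        def __add__(self, other):
          return self._read() + other

      v = tf.Variable(2.0)
      w = CastOnRead(v)
      print(w + tf.constant(1.0, dtype=tf.float16))  # -> 3.0 in float16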
      
      PiperOrigin-RevId: 277176725
      Change-Id: I43e8abcd69f99708d47ec5b3b82b67bab4494db1
  10. 08 October 2019, 1 commit
    • Remove the __init__.py content for keras/utils. · cab28a0b
      Committed by Scott Zhu
      1. Changed all imports that use keras.utils to explicit imports of the individual modules.
      2. Removed deprecated util imports in keras_preprocessing.
      3. Moved all the public symbols from __init__.py to all_utils.py, which is used by keras/application for injection.
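
      For illustration, a hedged sketch of the import-style change in point 1 (the modules actually touched are not listed here):
      # Before (assumed style, relying on re-exports from keras/utils/__init__.py):
      #   from tensorflow.python.keras.utils import to_categorical
      # After: import the individual module explicitly.
      from tensorflow.python.keras.utils import np_utils

      one_hot = np_utils.to_categorical([0, 1, 2], num_classes=3)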
      
      PiperOrigin-RevId: 273327600
  11. 26 September 2019, 1 commit
    • Support TPUStrategy in the mixed precision API. · 4a7687de
      Committed by Reed Wanderman-Milne
      This is done by creating a dynamic subclass of AutoCastVariable when it wraps a DistributedVariable subclass. Now, when wrapping a variable that subclasses from DistributedVariable, a dynamic class subclassing both AutoCastVariable and variable.__class__ is created. That way, the AutoCastVariable will still pass isinstance(auto_cast_variable, variable.__class__) checks. This allows AutoCastVariables to work with TPUStrategy, which extensively uses isinstance.
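
      A small self-contained sketch of the dynamic-subclass idea (stand-in class names, not TF internals):
      class DistributedVariableLike(object):   # stand-in for DistributedVariable
        def __init__(self, initial_value):
          self.initial_value = initial_value

      class AutoCastWrapper(object):           # stand-in for AutoCastVariable
        pass

      def wrap(variable):
        # Dynamically build a class inheriting from both the wrapper and the wrapped
        # variable's own class, so isinstance checks against either still succeed.
        dynamic_cls = type('AutoCast' + type(variable).__name__,
                           (AutoCastWrapper, type(variable)), {})
        wrapped = object.__new__(dynamic_cls)
        wrapped.__dict__.update(variable.__dict__)
        return wrapped

      v = DistributedVariableLike(3.0)
      w = wrap(v)
      assert isinstance(w, AutoCastWrapper) and isinstance(w, DistributedVariableLike)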
      
      Alternatives considered:
      1. Remove all isinstance checks of distributed values from distribution strategy. This is difficult, and people could add isinstance checks in the future.
      
      2. Adding a metaclass to DistributedValues and overriding __instancecheck__, or using an ABCMeta metaclass to register a custom subclass. The issue is that using metaclasses is complicated, and I'd rather avoid it.
      
      3. Using a variable_creator_scope with a lowered priority to have DistributedVariable wrap AutoCastVariables, instead of having AutoCastVariables wrap DistributedVariables. This requires modifications to DistributedVariables and requires the use of a private TF API in Keras, so I did not go with this approach.
      
      PiperOrigin-RevId: 271237386
  12. 11 September 2019, 1 commit
  13. 22 August 2019, 1 commit
  14. 21 August 2019, 1 commit
  15. 13 August 2019, 1 commit
    • Re-enable flaky TPU tests · 0a27dbf2
      Committed by Philip Pham
      Made one test deterministic by setting the random seed. Increased the shard
      count to deal with timeouts.
      
      PiperOrigin-RevId: 263041722
  16. 31 July 2019, 1 commit
  17. 30 July 2019, 2 commits
  18. 27 July 2019, 2 commits
  19. 22 July 2019, 1 commit
  20. 20 July 2019, 2 commits
    • Use a different CancellationManager for every execution of function/op · 606da502
      Committed by Akshay Modi
      The newly added benchmark shows about a 100ns loss on my machine.
      
      While not ideal, I believe the added time is justifiable since the loss is only in the op-by-op execution path (when a TF_CancellationManager is not active), and not on the function execution path (which has taken a back seat in the past). The previous behavior was also broken, so continuing to support it seems non-ideal. If this becomes a bottleneck in the future, we can probably explore ways of making it faster.
      
      PiperOrigin-RevId: 259059904
    • Skip the test that triggers the issue # 137776821 in Eager mode rather than... · b63c4b28
      Committed by Igor Saprykin
      Skip the test that triggers the issue # 137776821 in Eager mode rather than when run_distributed=True, because it still fails.
      
      PiperOrigin-RevId: 259020743
  21. 19 July 2019, 3 commits
    • Test run_distributed=True path in Keras dist-strat tests. · 022fa6c7
      Committed by Igor Saprykin
      This effectively replaces the cloning=False tests. The cloning argument no longer has an effect and is completely removed in change # 258789323.
      
      Mechanically this change is done by recursive rename of "cloning" to "run_distributed" in _test.py files followed by pyformatting the changed lines.
      
      PiperOrigin-RevId: 258864940
    • Retrace Keras history in `add_loss` and `add_metric` · a3777018
      Committed by Philip Pham
      In functional graph network models, custom losses and metrics can end up
      disconnected from the network. We can retrace the graph and insert the
      additional ancillary layers to support this use case.
      
      In this way, the user can use custom losses and metrics computed by layers.
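
      An illustrative use case (assumed example, not taken from the commit): a functional model whose extra loss and metric are computed from an intermediate tensor:
      import tensorflow as tf

      inputs = tf.keras.Input(shape=(4,))
      hidden = tf.keras.layers.Dense(8, activation='relu')(inputs)
      outputs = tf.keras.layers.Dense(1)(hidden)
      model = tf.keras.Model(inputs, outputs)

      # These depend on an intermediate tensor; the history retracing described above
      # is what lets Keras pick up the ancillary layers created by these ops.
      model.add_loss(0.01 * tf.reduce_mean(hidden))
      model.add_metric(tf.reduce_mean(hidden), name='mean_hidden', aggregation='mean')

      model.compile(optimizer='sgd', loss='mse')
      model.fit(tf.ones((8, 4)), tf.ones((8, 1)), epochs=1, verbose=0)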
      
      Fixes #30378.
      
      PiperOrigin-RevId: 258843847
    • Automated rollback of commit 11b33344. Revert #30053. · e5cf807e
      Committed by Pavithra Vijay
      PiperOrigin-RevId: 258813889
  22. 18 July 2019, 1 commit
    • Don't distribute Keras models using cloning in Eager mode. · 1129d27c
      Committed by Igor Saprykin
      This behavior isn't going to be available with the `compile(cloning=...)` flag.
      
      The tests had to be updated for this change for a few reasons: 1) Cloning and no-cloning support different features, so the expected exceptions are different. 2) TFOptimizer-wrapped optimizers don't work due to internal bug issue # 130808953.
      
      PiperOrigin-RevId: 258652546
  23. 17 July 2019, 1 commit
  24. 11 July 2019, 1 commit
  25. 05 July 2019, 1 commit
  26. 28 June 2019, 3 commits
  27. 25 June 2019, 1 commit
  28. 18 June 2019, 2 commits
  29. 12 June 2019, 1 commit
  30. 05 June 2019, 1 commit