- 26 Mar, 2020 (2 commits)
-
-
Committed by Reed Wanderman-Milne
I first submitted this in 3931d393, but it was rolled back because Nones were filtered out of the gradients but not out of the variables. I now add the Nones back to the gradients so they properly match up with the variables. PiperOrigin-RevId: 302107549 Change-Id: I81b7fb71c9cdaa458475d83f784366ce8405fb74
-
Committed by Ran Chen
CentralStorageStrategy PiperOrigin-RevId: 302804311 Change-Id: Ibb27c529251390f40338cd296537cd98f8940b56
-
- 13 Mar, 2020 (1 commit)
-
-
Committed by Ken Franko
- experimental_run_v2 -> run PiperOrigin-RevId: 300574367 Change-Id: I5d82ea5450a4d32aea6d05ed3db4f02b8edb2eea
-
- 28 Feb, 2020 (1 commit)
-
-
Committed by Reed Wanderman-Milne
The experimental_run_tf_function parameter no longer has any effect. I did not yet remove the functionality in testing_util.py and keras_parameterized.py that runs tests with experimental_run_tf_function set to both True and False; I will remove that functionality in a future change. PiperOrigin-RevId: 297674422 Change-Id: I5b1e67f78b4c3b60242241fb4dc2018f0ace6013
-
- 22 Feb, 2020 (2 commits)
-
-
Committed by A. Unique TensorFlower
Makes Model.compile run inside the captured distribution strategy, so that any metrics/optimizers created by compile are created in the distribution strategy scope (e.g. when deserializing strings that are the metric names). Also adds a correctness test that verifies the model successfully captures the distribution strategy. It also raises an error if there are metrics that are created outside the scope and not in compile.

For example, this will help the following case, because the optimizer/metrics get created by compile:

    with strategy.scope():
      model = ...
    model.compile(optimizer='sgd', metrics=['binary_accuracy'])

And it will raise an error in the following case, because the metrics are created in a different distribution strategy scope than the model:

    with strategy.scope():
      model = ...
    model.compile(optimizer=tf.keras.optimizers.Blah(),
                  metrics=[tf.keras.metrics.BinaryAccuracy()])

PiperOrigin-RevId: 296553610 Change-Id: I988a80c1863de732da45555c444fc5a237ecd425
-
Committed by Håkon Sandsmark
Some of the GPU strategies failed earlier with another error message.
-
- 21 Feb, 2020 (1 commit)
-
-
Committed by Håkon Sandsmark
When calling `Optimizer.apply_gradients()` in a cross-replica distribution context (with a non-default distribution strategy), `distribute_ctx.get_replica_context()` returns None, so it would fail with the error:

    [...]/optimizer_v2.py", line 448, in apply_gradients
      return distribute_ctx.get_replica_context().merge_call(
    AttributeError: 'NoneType' object has no attribute 'merge_call'

This commit changes the error to a `RuntimeError` with a more descriptive message (inspired by the error message in the v1 optimizer) that guides the user toward a fix: either call the `_distributed_apply()` function instead, or use `tf.distribute.Strategy.experimental_run_v2`.
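The shape of this guard can be sketched in plain Python (no TensorFlow): check for a replica context up front and raise a descriptive RuntimeError instead of letting the call crash on `None.merge_call(...)`. The names below are simplified stand-ins for the real TF internals.

```python
# Hypothetical sketch of the guard; not the actual optimizer_v2.py code.
def get_replica_context():
    # In a cross-replica context the real TF helper returns None.
    return None

class OptimizerSketch:
    def apply_gradients(self, grads_and_vars):
        ctx = get_replica_context()
        if ctx is None:
            # Before the fix this path died with an AttributeError on None.
            raise RuntimeError(
                "apply_gradients() cannot be called in a cross-replica "
                "context. Use tf.distribute.Strategy.experimental_run_v2 "
                "to enter a replica context, or call _distributed_apply().")
        return ctx.merge_call(self._apply, grads_and_vars)
```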
-
- 20 Feb, 2020 (1 commit)
-
-
Committed by Thomas O'Malley
Kept all new abstractions private for now. In a few weeks, if we're comfortable that these abstractions are working and stable, we should expose many of them publicly.

Capabilities added by this CL:
(1) Easy to create a custom training step by overriding Model._train_step.
(2) Easy to create custom tf.function / DistStrat logic by overriding Model._make_train_function.
(3) Advanced users can override Model.compile and Model.fit.
(4) Full support for dicts, nested structures, etc. with Subclassed Models.
(5) The "power user" path (tf.data inputs) only modifies data in Model._train_step, where this behavior is easy to override and disable. This applies even to Keras's assumption that data is passed in (x, y, sample_weight) format.

Behavior changes:
(1) "loss" passed to Callbacks is now stateful (like all other metrics in Callbacks). This greatly simplifies the training step logic and callback logic.
(2) ProgbarLogger always uses steps. If steps is not available, the ProgbarLogger handles inferring the steps after the first epoch.
(3) validation_batch_size added in `fit`, rather than inferred from the generator.
(4) Model.inputs, Model.outputs, Model.input_names, and Model.output_names are no longer populated for subclassed Models. Instead, "pseudo" output names are created for subclassed Models, which are only used for metric names and SavedModel's signature.
(5) NumPy floats are cast to backend.floatx(); otherwise inputs are left unchanged (this is likely not a behavior change; the old version did something similar, but the logic was scattered in many places).

PiperOrigin-RevId: 296090972 Change-Id: Ia5ac833fd39085bddb016833bd338083d0dc5fc2
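Capability (1), a custom training step via an overridable hook, follows the classic template-method pattern, which can be sketched in plain Python. `ModelSketch` and its method names are illustrative stand-ins for the private Keras hooks mentioned above, not the real API.

```python
# Hypothetical sketch of the "override the training step" hook.
class ModelSketch:
    def fit_one_batch(self, data):
        # The framework routes every batch through the hook.
        return self._train_step(data)

    def _train_step(self, data):
        # Default step: assumes (x, y, sample_weight)-shaped data.
        x, y, sample_weight = self._unpack(data)
        return {"loss": 0.0}  # a real step would run forward/backward here

    def _unpack(self, data):
        x, y, sample_weight = data
        return x, y, sample_weight

class CustomStepModel(ModelSketch):
    def _train_step(self, data):
        # A subclass replaces the step wholesale, bypassing the
        # (x, y, sample_weight) assumption entirely.
        return {"loss": float(len(data))}
```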
-
- 15 Feb, 2020 (1 commit)
-
-
Committed by Scott Zhu
PiperOrigin-RevId: 295171678 Change-Id: Ia93dd94ac2f28bb2bdc1e346961e1d138e3ba4db
-
- 25 Jan, 2020 (1 commit)
-
-
Committed by Reed Wanderman-Milne
PiperOrigin-RevId: 291514265 Change-Id: I0540d980fdf0cb6222273ff48f6f72223ba90ff2
-
- 29 Oct, 2019 (1 commit)
-
-
Committed by Reed Wanderman-Milne
The core issue is that if you call Variable.value() when a TPUStrategy is used, gradients with respect to that Variable are None (I'm not sure why). All the operator overloads in AutoCastVariable used Variable.value(), so they were broken with TPUStrategy. I fixed this by calling Variable.read_value() instead. This fixes any case where an AutoCastVariable operator is used, such as most RNNs. PiperOrigin-RevId: 277176725 Change-Id: I43e8abcd69f99708d47ec5b3b82b67bab4494db1
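The shape of the fix can be shown in plain Python (no TensorFlow): the wrapper's operator overloads delegate to read_value() instead of value(). Both classes below are illustrative stand-ins, not the real TF types.

```python
# Hypothetical sketch of switching operator overloads to read_value().
class FakeVariable:
    def __init__(self, v):
        self._v = v

    def value(self):
        # On TPU, gradients taken through this path came back as None.
        return self._v

    def read_value(self):
        # Gradient-safe read; the fix switched the overloads to this.
        return self._v

class AutoCastVariableSketch:
    def __init__(self, var):
        self._var = var

    def __add__(self, other):
        # Before the fix: return self._var.value() + other
        return self._var.read_value() + other
```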
-
- 08 Oct, 2019 (1 commit)
-
-
Committed by Scott Zhu
1. Change all imports that use keras.utils to explicit imports of the individual modules.
2. Removed deprecated util imports in keras_preprocessing.
3. Moved all the public symbols from __init__.py to all_utils.py, which is used by keras/application for injection.

PiperOrigin-RevId: 273327600
-
- 26 Sep, 2019 (1 commit)
-
-
Committed by Reed Wanderman-Milne
This is done by creating a dynamic subclass of AutoCastVariable when it wraps a DistributedVariable subclass. Now, when wrapping a variable that subclasses DistributedVariable, a dynamic class subclassing both AutoCastVariable and variable.__class__ is created. That way, the AutoCastVariable will still pass isinstance(auto_cast_variable, variable.__class__) checks. This allows AutoCastVariables to work with TPUStrategy, which uses isinstance extensively.

Alternatives considered:
1. Remove all isinstance checks of distributed values from distribution strategy. This is difficult, and people could add isinstance checks in the future.
2. Add a metaclass to DistributedValues and override __instancecheck__, or use an ABCMeta metaclass to register a custom subclass. The issue is that using metaclasses is complicated, and I'd rather avoid it.
3. Use a variable_creator_scope with a lowered priority to have DistributedVariable wrap AutoCastVariables, instead of having AutoCastVariables wrap DistributedVariables. This requires modifications to DistributedVariables and would require using a private TF API in Keras, so I did not go with this approach.

PiperOrigin-RevId: 271237386
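The dynamic-subclass trick described above can be sketched in plain Python with `type()`: when the wrapped variable is a distributed type, build a new class deriving from both the wrapper and `type(variable)` so that `isinstance(wrapper, type(variable))` still holds. Class names here are illustrative stand-ins for the real TF classes.

```python
# Hypothetical sketch of the dynamic-subclass wrapping approach.
class AutoCastVariableSketch:
    def __init__(self, var):
        self._var = var

class DistributedVariableSketch:
    pass

def create_autocast_variable(variable):
    if not isinstance(variable, DistributedVariableSketch):
        return AutoCastVariableSketch(variable)
    # Dynamically derive from both classes so isinstance checks against
    # the distributed type keep passing after wrapping.
    cls = type("AutoCastDistributedVariable",
               (AutoCastVariableSketch, type(variable)), {})
    return cls(variable)
```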
-
- 11 Sep, 2019 (1 commit)
-
-
Committed by Zhenyu Tan
PiperOrigin-RevId: 268391506
-
- 22 Aug, 2019 (1 commit)
-
-
Committed by Igor Saprykin
PiperOrigin-RevId: 264693076
-
- 21 Aug, 2019 (1 commit)
-
-
Committed by Pavithra Vijay
1. Fix incorrect steps inference when validation_split is provided to fit in the v2 single-path execution.
2. Infer validation_steps for a dataset when it is not provided to fit, and steps when not provided to evaluate or predict.

PiperOrigin-RevId: 264508014
-
- 13 Aug, 2019 (1 commit)
-
-
Committed by Philip Pham
Made one test deterministic by setting the random seed. Increased the shard count to deal with timeouts. PiperOrigin-RevId: 263041722
-
- 31 Jul, 2019 (1 commit)
-
-
Committed by Pavithra Vijay
Renames the `run_distributed` flag to `experimental_run_tf_function` and modifies the tests generated by the keras_all_modes decorator to better reflect that. PiperOrigin-RevId: 260767064
-
- 30 Jul, 2019 (2 commits)
-
-
Committed by Thomas O'Malley
path. PiperOrigin-RevId: 260719633
-
Committed by Priya Gupta
PiperOrigin-RevId: 260619319
-
- 27 Jul, 2019 (2 commits)
-
-
Committed by Philip Pham
Current logic filters layers and inserts new ones collecting all their nodes. Two cases weren't being handled: (1) the nodes of reused layers aren't processed, and moreover, (2) nodes were not being filtered, so unkept (from graph network layers) or unused nodes were being added from new layers. I also took the liberty of removing the `_layers_by_depth` attribute. It's not being used anywhere and is difficult to synchronize as layers are reused. PiperOrigin-RevId: 260211815
-
Committed by Priya Gupta
PiperOrigin-RevId: 260180753
-
- 22 Jul, 2019 (1 commit)
-
-
Committed by Rachel Lim
[tf.data] Update rebatching to use a "fallback" method when it can't find a Batch dataset. Also fixes some edge cases. PiperOrigin-RevId: 259322208
-
- 20 Jul, 2019 (2 commits)
-
-
Committed by Akshay Modi
The newly added benchmark shows about a 100ns loss on my machine. While not ideal, I believe the added time is justifiable since the loss is only in the op-by-op execution path (when a TF_CancellationManager is not active), and not on the function execution path (which has taken a back seat in the past). The previous behavior was also broken, so continuing to support it seems non-ideal. If this becomes a bottleneck in the future, we can probably explore ways of making it faster. PiperOrigin-RevId: 259059904
-
Committed by Igor Saprykin
Skip the test that triggers the issue # 137776821 in Eager mode rather than when run_distributed=True, because it still fails. PiperOrigin-RevId: 259020743
-
- 19 Jul, 2019 (3 commits)
-
-
Committed by Igor Saprykin
This effectively replaces the cloning=False tests. The cloning argument doesn't have an effect anymore and is completely removed in the change # 258789323. Mechanically this change is done by recursive rename of "cloning" to "run_distributed" in _test.py files followed by pyformatting the changed lines. PiperOrigin-RevId: 258864940
-
Committed by Philip Pham
In functional graph network models, custom losses and metrics can end up disconnected from the network. We can retrace the graph and insert the additional ancillary layers to support this use case. In this way, the user can use custom losses and metrics computed by layers. Fixes #30378. PiperOrigin-RevId: 258843847
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258813889
-
- 18 Jul, 2019 (1 commit)
-
-
Committed by Igor Saprykin
This behavior isn't going to be available with the `compile(cloning=...)` flag. The tests had to be updated for this change for a few reasons: 1) Cloning and no-cloning support different features, so the expected thrown exceptions are different. 2) TFOptimizer-wrapped optimizers don't work due to internal bug issue # 130808953. PiperOrigin-RevId: 258652546
-
- 17 Jul, 2019 (1 commit)
-
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258438818
-
- 11 Jul, 2019 (1 commit)
-
-
Committed by Yong Tang
Signed-off-by: Yong Tang <yong.tang.github@outlook.com>
-
- 05 Jul, 2019 (1 commit)
-
-
Committed by Taehoon Lee
-
- 28 Jun, 2019 (3 commits)
-
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 255504802
-
Committed by Mihai Maruseac
PiperOrigin-RevId: 255443249
-
Committed by Pavithra Vijay
Make `model._eager_losses` property thread local so that it works correctly with mirrored strategy threads. PiperOrigin-RevId: 255419308
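The thread-local pattern described here is standard library territory and can be sketched without TensorFlow: per-thread state lives in `threading.local`, so each MirroredStrategy worker thread sees its own independent list. `ModelSketch` is an illustrative stand-in, not the Keras class.

```python
import threading

# Hypothetical sketch of a thread-local _eager_losses property.
class ModelSketch:
    def __init__(self):
        self._thread_local = threading.local()

    @property
    def _eager_losses(self):
        # Each thread lazily gets its own independent list.
        if not hasattr(self._thread_local, "eager_losses"):
            self._thread_local.eager_losses = []
        return self._thread_local.eager_losses
```

Losses appended on one thread are invisible to every other thread, which is exactly the isolation mirrored-strategy worker threads need.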
-
- 25 Jun, 2019 (1 commit)
-
-
Committed by Pavithra Vijay
- We raise an error in eager mode, so ideally we would raise an error in function mode as well. For now we raise a warning instead, so as not to break all existing usages. PiperOrigin-RevId: 254817010
-
- 18 Jun, 2019 (2 commits)
-
-
Committed by Igor Saprykin
The extra input confuses the logic that unpacks (x, y, sample_weights), and x remains a tuple, which then confuses the validation logic. In the originally reported issue, the data pipeline has "return input, {}", which triggers this condition. If the code returned only input rather than "input, {}", the issue would not happen. PiperOrigin-RevId: 253609807
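The unpacking behavior at issue can be sketched in plain Python: drop a trailing empty dict (as produced by "return input, {}") before splitting data into (x, y, sample_weight), so x does not end up as a 2-tuple. This is an illustration of the idea, not the actual Keras code.

```python
# Hypothetical sketch of tolerant (x, y, sample_weight) unpacking.
def unpack_x_y_sample_weight(data):
    if isinstance(data, tuple) and len(data) > 1 and data[-1] == {}:
        data = data[:-1]  # drop the empty-dict placeholder
    if not isinstance(data, tuple):
        return (data, None, None)
    if len(data) == 1:
        return (data[0], None, None)
    if len(data) == 2:
        return (data[0], data[1], None)
    return (data[0], data[1], data[2])
```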
-
Committed by Igor Saprykin
PiperOrigin-RevId: 253594910
-
- 12 Jun, 2019 (1 commit)
-
-
Committed by A. Unique TensorFlower
they have been moved to the Estimator repo. PiperOrigin-RevId: 252688853
-
- 05 Jun, 2019 (1 commit)
-
-
Committed by Sourabh Bajaj
Support model.fit and evaluate in 2.0 with TPUStrategy using the experimental_run + train_on_batch API. PiperOrigin-RevId: 251570029
-