- 26 March 2020 (3 commits)

Committed by Reed Wanderman-Milne
I first submitted this in 3931d393, but it was rolled back because Nones were filtered out of the gradients but not out of the variables. I now add the Nones back to the gradients so that they properly match up with the variables.
PiperOrigin-RevId: 302107549
Change-Id: I81b7fb71c9cdaa458475d83f784366ce8405fb74
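A minimal sketch of the alignment problem this fix addresses (assuming the usual tape-based flow; not the actual TF source): dropping Nones from the gradients alone de-synchronizes them from the variables.

```python
import tensorflow as tf

v1 = tf.Variable(1.0)
v2 = tf.Variable(2.0)  # unused by the loss, so its gradient comes back as None

with tf.GradientTape() as tape:
    loss = v1 * 3.0
grads = tape.gradient(loss, [v1, v2])  # [3.0, None]

# Buggy pattern: filtering Nones from the gradients but not the variables
# leaves the two lists different lengths, so pairs no longer match up.
# grads = [g for g in grads if g is not None]

# Keeping the Nones (or filtering both lists together) preserves alignment.
grads_and_vars = [(g, v) for g, v in zip(grads, [v1, v2]) if g is not None]
tf.keras.optimizers.SGD(0.1).apply_gradients(grads_and_vars)
```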
Committed by Ran Chen
CentralStorageStrategy PiperOrigin-RevId: 302804311 Change-Id: Ibb27c529251390f40338cd296537cd98f8940b56
Committed by Ran Chen
For some strategies we don't do an all-reduce, so the name all_reduce_sum_gradients can be misleading. The parameter is also made experimental because of issues with CentralStorageStrategy.
PiperOrigin-RevId: 302734837
Change-Id: Ic30e2f81ab61eef568ee68e5752015f950117d47
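A hedged usage sketch of the renamed flag, assuming it is the `experimental_aggregate_gradients` argument on `OptimizerV2.apply_gradients` (TF 2.2-era API):

```python
import tensorflow as tf

optimizer = tf.keras.optimizers.SGD(learning_rate=0.1)

# With experimental_aggregate_gradients=False the optimizer skips its
# built-in cross-replica aggregation; the caller must pass in gradients
# that are already reduced.
def apply_prereduced(grads_and_vars):
    optimizer.apply_gradients(grads_and_vars,
                              experimental_aggregate_gradients=False)
```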
- 24 March 2020 (1 commit)

Committed by Ken Franko
PiperOrigin-RevId: 302069884 Change-Id: I32ff43f146c6f60d462d2713908c3cf258ace3de
- 2 March 2020 (1 commit)

Committed by A. Unique TensorFlower
PiperOrigin-RevId: 298206172 Change-Id: I814b0a2f71467c8797d9bd009822758d65e034cb
- 22 February 2020 (2 commits)

Committed by Ran Chen
PiperOrigin-RevId: 296576732 Change-Id: Ib3432574cf2d2fd7501e120cf3333fd2bfd51ea3
Committed by Håkon Sandsmark
Remove suggestion about private method.
- 21 February 2020 (1 commit)

Committed by Håkon Sandsmark
When calling `Optimizer.apply_gradients()` in a cross-replica distribution context (with a non-default distribution strategy), `distribute_ctx.get_replica_context()` returns None, so the call would fail with:

    [...]/optimizer_v2.py", line 448, in apply_gradients
      return distribute_ctx.get_replica_context().merge_call(
    AttributeError: 'NoneType' object has no attribute 'merge_call'

This commit changes the error to a `RuntimeError` with a more descriptive message (inspired by the error message in the v1 optimizer) that guides the user toward a fix: either call the `_distributed_apply()` function instead, or use `tf.distribute.Strategy.experimental_run_v2`.
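A minimal sketch of the usage the new error message points at, assuming the TF 2.1-era `experimental_run_v2` API (later renamed `Strategy.run`):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()

with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    optimizer = tf.keras.optimizers.SGD(0.1)

def replica_step(x, y):
    # Runs in replica context, so apply_gradients can merge_call as it expects.
    with tf.GradientTape() as tape:
        loss = tf.reduce_mean(tf.square(model(x) - y))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))

@tf.function
def train_step(x, y):
    # Entering via experimental_run_v2 avoids calling the optimizer from
    # cross-replica scope, which is what triggered the error above.
    strategy.experimental_run_v2(replica_step, args=(x, y))
```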
- 20 February 2020 (1 commit)

Committed by Ran Chen
This option allows post-processing of all-reduced gradients without inheriting from the optimizer.
PiperOrigin-RevId: 296118658
Change-Id: Ifb6884ec981b06eb70fe5ee9126ab9ac013550e9
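A hedged sketch of what such post-processing could look like without subclassing the optimizer, assuming the option is the `experimental_aggregate_gradients` flag and that the reduction is done explicitly first:

```python
import tensorflow as tf

def apply_with_postprocessing(optimizer, grads_and_vars):
    grads, variables = zip(*grads_and_vars)
    # Explicitly aggregate across replicas (runs in replica context).
    reduced = tf.distribute.get_replica_context().all_reduce(
        tf.distribute.ReduceOp.SUM, list(grads))
    # Post-process the already-reduced gradients, e.g. clip them.
    clipped = [tf.clip_by_norm(g, 1.0) for g in reduced]
    optimizer.apply_gradients(zip(clipped, variables),
                              experimental_aggregate_gradients=False)
```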
- 14 February 2020 (1 commit)

Committed by A. Unique TensorFlower
PiperOrigin-RevId: 295048520 Change-Id: I8bca07075d4525f69327460c348017f8400113d2
- 12 February 2020 (2 commits)

Committed by A. Unique TensorFlower
PiperOrigin-RevId: 294582830 Change-Id: Ide1b948e3be3fdc7b77f0258b08dad8d4d7c30d4
Committed by A. Unique TensorFlower
Fix a bug where the optimizer's clipvalue and clipnorm settings were silently ignored by the Keras training loop. Also raise an error when clipvalue or clipnorm is set together with a distribution strategy, because global gradient clipping with a distribution strategy is not supported just yet. We are reworking the optimizers to make this possible in a different way.
PiperOrigin-RevId: 294575398
Change-Id: I3d1bb69857d4ced857928e7dc83729c315ed00f6
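For reference, these are constructor-level settings on the Keras optimizers; after the fix they take effect in model.fit(), and combined with a distribution strategy they now raise instead of being silently ignored:

```python
import tensorflow as tf

# Clip each gradient to a maximum L2 norm of 1.0.
opt_by_norm = tf.keras.optimizers.SGD(learning_rate=0.01, clipnorm=1.0)

# Clip each gradient element-wise to [-0.5, 0.5].
opt_by_value = tf.keras.optimizers.SGD(learning_rate=0.01, clipvalue=0.5)
```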
- 11 February 2020 (2 commits)

Committed by Kazuaki Ishizaki
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 294308887 Change-Id: Ieb23e3a1efc9d7235b5998f295de00cafa90f954
- 6 January 2020 (1 commit)

Committed by Shreyash Patodia
- 7 December 2019 (1 commit)

Committed by Karmel Allison
PiperOrigin-RevId: 284266397 Change-Id: I3f5485d16ccb1d4b87508a78760a8b6ba5302cf1
- 5 December 2019 (1 commit)

Committed by Taylor Robie
Refactor the Keras optimizer gradient-apply function to reuse a context call and to skip name scopes in eager mode, where they have no effect. This reduces the Python overhead of applying gradient updates in eager mode.
PiperOrigin-RevId: 283867294
Change-Id: I8d61428b79d377c3f0ff724a56aaffdb795865ba
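An illustrative sketch (not the actual TF source) of the name-scope part of this refactor: skip scope creation entirely in eager mode, where it has no effect on op names and only adds Python overhead:

```python
import contextlib
import tensorflow as tf

def maybe_name_scope(name):
    # In eager mode, name scopes do nothing useful for these updates,
    # so returning a null context avoids their Python-level cost.
    if tf.executing_eagerly():
        return contextlib.nullcontext()
    return tf.name_scope(name)

def apply_update(var, grad, lr):
    with maybe_name_scope("update_" + var.name.split(":")[0]):
        var.assign_sub(lr * grad)
```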
- 2 November 2019 (1 commit)

Committed by Gaurav Jain
We do not support complex dtypes with certain optimizers, such as Ftrl, FtrlV2, AdamWithAmsgrad, AdaMax, AddSign, and PowerSign, since they may rely on operations such as sqrt that are missing for complex values. Fixes #32774.
PiperOrigin-RevId: 277953548
Change-Id: Ia075aa5c3f944de932d71b9741d626f7ebe5416f
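A hypothetical validation sketch in the spirit of this change (the names here are illustrative, not the actual TF check):

```python
import tensorflow as tf

_COMPLEX_UNSUPPORTED = ("Ftrl", "Adamax", "AddSign", "PowerSign")  # illustrative

def check_supported_dtype(optimizer_name, var):
    # Reject complex variables up front for optimizers whose update rules
    # rely on ops such as sqrt that are undefined for complex dtypes.
    if var.dtype.is_complex and optimizer_name in _COMPLEX_UNSUPPORTED:
        raise ValueError(
            f"{optimizer_name} does not support complex dtype "
            f"{var.dtype.name} for variable {var.name}")
```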
- 3 October 2019 (1 commit)

Committed by Jonathan Hseu
tf.distribute.Strategy from the original variable. PiperOrigin-RevId: 272498058
- 19 September 2019 (1 commit)

Committed by Anudhyan Boral
PiperOrigin-RevId: 269966781
- 10 September 2019 (1 commit)

Committed by Reed Wanderman-Milne
This fixes an "IndexError: list index out of range" error when OptimizerV2.minimize or OptimizerV2.apply_gradients is called with an empty list of variables under a DistributionStrategy. PiperOrigin-RevId: 268132700
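A sketch of a guard in the spirit of this fix (not the actual TF source): short-circuit when there is nothing to apply rather than indexing into an empty list of (gradient, variable) pairs:

```python
import tensorflow as tf

def apply_gradients_safely(optimizer, grads_and_vars):
    grads_and_vars = list(grads_and_vars)
    if not grads_and_vars:
        # Nothing to update; return a no-op instead of hitting IndexError.
        return tf.no_op()
    return optimizer.apply_gradients(grads_and_vars)
```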
- 21 August 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 264402611
- 13 August 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 263071623
- 9 August 2019 (2 commits)

Committed by HarikrishnanBalagopal
Committed by HarikrishnanBalagopal
The original example processed the gradients in two steps:

    1st: grads_and_vars = zip(processed_grads, var_list)
    2nd: capped_grads_and_vars = [(MyCapper(gv[0]), gv[1]) for gv in grads_and_vars]

The second line is especially awkward because it unnecessarily zips the gradients with var_list even though it only transforms the gradient part of each pair. Refactored the example to be clearer: there is now a single line that processes the gradients.
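A hedged reconstruction of the cleaned-up pattern, with MyCapper stood in by a simple value clip; each gradient is now processed exactly once before being zipped with the variables:

```python
import tensorflow as tf

def my_capper(grad):
    return tf.clip_by_value(grad, -1.0, 1.0)

var_list = [tf.Variable([3.0, -2.0])]
opt = tf.keras.optimizers.SGD(0.1)

with tf.GradientTape() as tape:
    loss = tf.reduce_sum(tf.square(var_list[0]))
grads = tape.gradient(loss, var_list)
capped_grads = [my_capper(g) for g in grads]  # the single processing pass
opt.apply_gradients(zip(capped_grads, var_list))
```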
- 31 July 2019 (1 commit)

Committed by Derek Murray
PiperOrigin-RevId: 260734042
- 23 July 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 259403870
- 18 July 2019 (1 commit)

Committed by Taylor Robie
Update the Keras v2 optimizers to reuse coefficients that are shared across all updates, which reduces the total number of ops created by between 5% (for simple optimizers such as SGD and Adagrad) and 25% (for complicated optimizers such as Adam and NAdam). Separate copies are made for each device and dtype. The effect of this change on run time is fairly minimal, since Grappler is expected to consolidate most of these ops; however, it does improve graph construction time.
PiperOrigin-RevId: 258581998
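An illustrative sketch (not the actual TF source) of caching shared update coefficients once per (device, dtype) pair instead of recreating them for every variable update:

```python
import tensorflow as tf

class CoefficientCache:
    """Caches per-(device, dtype) copies of a shared coefficient."""

    def __init__(self, learning_rate):
        self._lr = learning_rate
        self._cache = {}

    def get(self, var):
        key = (var.device, var.dtype.base_dtype)
        if key not in self._cache:
            with tf.device(var.device):
                # Created once, then reused by every update with this
                # device/dtype combination.
                self._cache[key] = tf.cast(self._lr, var.dtype.base_dtype)
        return self._cache[key]
```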
- 5 July 2019 (1 commit)

Committed by Håkon Sandsmark
- 13 June 2019 (2 commits)

Committed by Zhenyu Tan
PiperOrigin-RevId: 252910734
Committed by Zhenyu Tan
in graph rewrite. PiperOrigin-RevId: 252856292
- 11 June 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 252525789
- 10 June 2019 (1 commit)

Committed by Chris Jones
PiperOrigin-RevId: 252370482
- 4 June 2019 (1 commit)

Committed by Katherine Wu
To save and revive a model:
1. Save the model using tf.saved_model.save.
2. Call load_from_save_model_v2.

This restores various metadata about Keras models and layers, as well as their call and loss functions.

Changes to object serialization:
- Adds private fields for tracking an object's identifier and metadata.
- Adds _list_extra_dependencies_for_serialization, which allows objects to save extra dependencies when serialized to SavedModel.
- The object graph view maintains a serialization cache that is passed to each object when serializing functions and extra dependencies.

PiperOrigin-RevId: 251386039
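A hedged usage sketch of the save/revive flow. The loader name load_from_save_model_v2 is taken from the commit message above; in released TF the equivalent public entry point is tf.keras.models.load_model on a SavedModel directory:

```python
import tensorflow as tf

model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
model.compile(optimizer="adam", loss="mse")

tf.saved_model.save(model, "/tmp/my_model")            # step 1: save
revived = tf.keras.models.load_model("/tmp/my_model")  # step 2: revive
```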
- 22 May 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 249328139
- 9 May 2019 (1 commit)

Committed by A. Unique TensorFlower
PiperOrigin-RevId: 247305938
- 2 May 2019 (1 commit)

Committed by Pavithra Vijay
PiperOrigin-RevId: 246199249
- 30 April 2019 (1 commit)

Committed by Zhenyu Tan
PiperOrigin-RevId: 245793420
- 25 April 2019 (1 commit)

Committed by Yanhui Liang
PiperOrigin-RevId: 245155870
- 17 April 2019 (1 commit)

Committed by Pavithra Vijay
1. Remove all scaling from tf.losses and tf.keras.losses.
2. Add appropriate scaling in Keras compile/fit for all types of losses and optimizers.
3. For backward compatibility with custom estimators, detect the case of estimator + distribution strategy + optimizer v1, and scale in optimizer.compute_gradients in that case (same as the 1.13 behavior). Optimizer v2 never does scaling, estimator or not.

PiperOrigin-RevId: 243909950
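A hedged sketch of the scaling convention this change centralizes: under a distribution strategy, per-example losses are scaled by the global batch size so that summing gradients across replicas yields the correct magnitude:

```python
import tensorflow as tf

GLOBAL_BATCH_SIZE = 64

# Disable the loss's implicit averaging so the scaling is explicit.
loss_fn = tf.keras.losses.MeanSquaredError(
    reduction=tf.keras.losses.Reduction.NONE)

def compute_loss(labels, predictions):
    per_example_loss = loss_fn(labels, predictions)
    # Divide by the global batch size, not the per-replica batch size.
    return tf.reduce_sum(per_example_loss) * (1.0 / GLOBAL_BATCH_SIZE)
```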