- 07 Jun 2019, 5 commits
-
-
Committed by Goldie Gadde
Revert "Fix an important performance regression for LSTM and GRU in t…
-
Committed by Goldie Gadde
This reverts commit 4f39bd9c.
-
Committed by Goldie Gadde
Fix an important performance regression for LSTM and GRU in tf 2.0
-
Committed by Scott Zhu
The issue was caused by auto-inlining of the tf.function in eager context, which prevented Grappler from performing the swap optimization. PiperOrigin-RevId: 251945251
-
Committed by Goldie Gadde
[XLA] Seed each convolution with the same rng state, so that the conv…
-
- 06 Jun 2019, 13 commits
-
-
Committed by Tim Shen
[XLA] Seed each convolution with the same rng state, so that the conv autotuning input is consistent even when run individually. PiperOrigin-RevId: 251332367
-
Committed by Goldie Gadde
Fix the missing numpy import
-
Committed by Goldie Gadde
-
Committed by Katherine Wu
Additional change: Sequential models are now revived as Sequential. PiperOrigin-RevId: 251723025
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 251579891
-
Committed by Shanqing Cai
Details of the breakage: the "keras_version" attr of a saved HDF5 (.h5) file of a tf.keras model started to have quotes around it today. For instance, it ought to be 2.2.4-tf, but instead becomes "2.2.4-tf" (with the quotes). The root-cause CL appears to be CL/251386039. This was discovered during the TensorFlow.js nightly benchmark. This CL fixes the breakage and adds a unit test to prevent regression. PiperOrigin-RevId: 251544402
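The quoting bug described above can be illustrated without TensorFlow or h5py. This is a minimal sketch of the kind of defensive normalization a loader might apply; the helper name is hypothetical, not the actual fix in the referenced CL:

```python
def normalize_keras_version(raw):
    """Hypothetical helper: decode an HDF5 string attribute and strip
    accidental surrounding quotes, so '"2.2.4-tf"' becomes '2.2.4-tf'."""
    if isinstance(raw, bytes):  # HDF5 string attrs often arrive as bytes
        raw = raw.decode("utf-8")
    if len(raw) >= 2 and raw[0] == raw[-1] == '"':
        raw = raw[1:-1]  # drop the spurious quotes
    return raw

print(normalize_keras_version('"2.2.4-tf"'))  # 2.2.4-tf
print(normalize_keras_version(b"2.2.4-tf"))   # 2.2.4-tf
```

A well-formed version string passes through unchanged, so the normalization is safe to apply to both old and newly written files.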
-
Committed by Anna R
PiperOrigin-RevId: 251374355
-
Committed by Francois Chollet
PiperOrigin-RevId: 251323411
-
Committed by Goldie Gadde
Marks Keras set_session as compat.v1 only. Also moves some renames to…
-
Committed by Goldie Gadde
Make default Keras ConfigProto use tf.config
-
Committed by A. Unique TensorFlower
Marks Keras set_session as compat.v1 only. Also moves some renames to the manual renames that had been incorrectly placed in the auto-generated symbol mappings. PiperOrigin-RevId: 251708447
-
Committed by Gaurav Jain
PiperOrigin-RevId: 251659257
-
Committed by Goldie Gadde
changes.
-
- 05 Jun 2019, 8 commits
-
-
Committed by Yifei Feng
Cleaning up this file, so that only Kathy's changes are in.
-
Committed by Goldie Gadde
back. Cleaning up this file, so that only Kathy's changes are in.
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 251441123
-
Committed by Goldie Gadde
-
Committed by Goldie Gadde
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 251343281
-
Committed by Gaurav Jain
PiperOrigin-RevId: 251391606
-
Committed by Katherine Wu
To save and revive a model: 1. Save the model using tf.saved_model.save. 2. Call load_from_save_model_v2. This restores various metadata about Keras models and layers, as well as their call and loss functions. Changes to object serialization: adds private fields for tracking an object's identifier and metadata; adds _list_extra_dependencies_for_serialization, which allows objects to save extra dependencies when serialized to SavedModel; the object graph view maintains a serialization cache object that is passed to each object when serializing functions/extra dependencies. PiperOrigin-RevId: 251386039
-
- 04 Jun 2019, 14 commits
-
-
Committed by Gunhan Gulsoy
R2.0 fastforward branch.
-
Committed by Goldie Gadde
-
Committed by Jiri Simsa
PiperOrigin-RevId: 251289411
-
Committed by Andiry Xu
Replacing an unknown shape with an empty shape causes incompatibility between the inferred shape and the actual (annotated) shape. Also factor out UpdatePlaceholderShape for readability. PiperOrigin-RevId: 251288300
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 251284743
-
Committed by Francois Chollet
subclassed models inside graph networks. PiperOrigin-RevId: 251275239
-
Committed by Penporn Koanantakool
PiperOrigin-RevId: 251274185
-
Committed by Peter Hawkins
[XLA] Change GenericTransferManager::TransferLiteralFromDevice to enqueue memcpys on a stream, rather than synchronously performing transfers one-by-one. On GPU in particular it is preferable to avoid repeated host/device synchronization when transferring large tuples. PiperOrigin-RevId: 251273699
-
Committed by Frank Chen
PiperOrigin-RevId: 251270126
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 251268848
-
Committed by Benjamin Kramer
We emit batch dots by slicing along the major-most dimension; this doesn't work if the batch dimensions are minor. Forcing row-major is slightly too strict because the order of the non-batch dimensions could still be transposed. We can optimize that later if it turns out to be a problem. PiperOrigin-RevId: 251268343
-
Committed by Jiri Simsa
[tf.data] Modify the rebatching logic to round up the result of dividing the original batch size by the number of workers. PiperOrigin-RevId: 251267307
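The round-up division in the tf.data rebatching change above can be sketched in plain Python; this is a simplified model of the arithmetic only, not the actual tf.data implementation:

```python
def rebatched_size(batch_size, num_workers):
    """Per-worker batch size after rebatching: ceil(batch_size / num_workers),
    computed with integer arithmetic via negated floor division."""
    return -(-batch_size // num_workers)

# A global batch of 10 split across 4 workers rounds up to batches of 3
print(rebatched_size(10, 4))  # 3
print(rebatched_size(8, 4))   # 2
```

Rounding up (rather than down) ensures no examples are dropped when the batch size is not evenly divisible by the worker count; the last batch per worker may simply be partial.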
-
Committed by Edward Loper
PiperOrigin-RevId: 251265677
-
Committed by Thomas O'Malley
PiperOrigin-RevId: 251265634
-