- 14 Sep 2021, 2 commits
-
-
Committed by Edward Loper
Update Keras layers to use `tf.xyz` rather than `tf.raw_ops.xyz`. (E.g., use tf.matmul rather than tf.raw_ops.MatMul.) ExtensionTypes that want to support these layers can do so by adding dispatch. (E.g., tf.matmul supports dispatch; but tf.raw_ops.MatMul does not.) PiperOrigin-RevId: 396441290
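The difference between a dispatchable API like `tf.matmul` and a raw op like `tf.raw_ops.MatMul` can be illustrated with a minimal pure-Python sketch (illustrative only; all names here are hypothetical and this is not TensorFlow's actual dispatch machinery): the public API consults a registry of handlers keyed by argument type before falling back to the raw op, so extension types can intercept the call.

```python
# Minimal sketch of type-based dispatch (hypothetical names; not
# TensorFlow's real implementation).
_DISPATCH = {}

def register(arg_type):
    """Register a handler invoked when the public API sees arg_type."""
    def deco(fn):
        _DISPATCH[arg_type] = fn
        return fn
    return deco

def raw_matmul(a, b):
    # Stand-in for tf.raw_ops.MatMul: only understands plain nested lists
    # and performs no dispatch at all.
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

def matmul(a, b):
    # Stand-in for tf.matmul: checks the dispatch registry first, so an
    # extension type can supply its own behavior.
    handler = _DISPATCH.get(type(a))
    if handler is not None:
        return handler(a, b)
    return raw_matmul(a, b)

class MaskedMatrix:
    """A toy 'extension type' wrapping a matrix."""
    def __init__(self, values):
        self.values = values

@register(MaskedMatrix)
def _masked_matmul(a, b):
    # The handler unwraps, delegates, and re-wraps.
    return MaskedMatrix(raw_matmul(a.values, b))

result = matmul(MaskedMatrix([[1, 2]]), [[3], [4]])
print(result.values)  # [[11]]
```

Calling `raw_matmul` with a `MaskedMatrix` would fail, which is why layers that go through the raw op cannot be supported by adding dispatch.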
-
Committed by A. Unique TensorFlower
Whether to perform profiling should be at the user's discretion. PiperOrigin-RevId: 396394295
-
- 11 Sep 2021, 1 commit
-
-
Committed by Monica Song
Enable the Functional class's from_config method to be used for subclassed models with Functional constructors. PiperOrigin-RevId: 396047985
-
- 10 Sep 2021, 5 commits
-
-
Committed by Matt Watson
When either layer receives an image that is smaller than the crop box, we will take the largest box of the target aspect ratio that fits inside the input image, and resize it to fit. This is already the behavior of the RandomCrop layer at inference time. PiperOrigin-RevId: 395828060
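The "largest box of the target aspect ratio" logic amounts to picking whichever image dimension is limiting. A simplified sketch of the geometry (hypothetical helper name; not the layer's actual code):

```python
def largest_crop_box(img_h, img_w, target_h, target_w):
    """Return (box_h, box_w): the largest box with aspect ratio
    target_h/target_w that fits inside an img_h x img_w image.
    Illustrative sketch only."""
    target_ratio = target_h / target_w  # height / width
    if img_h / img_w < target_ratio:
        # Image is proportionally too short: height limits the box.
        box_h = img_h
        box_w = int(img_h / target_ratio)
    else:
        # Image is proportionally too narrow: width limits the box.
        box_w = img_w
        box_h = int(img_w * target_ratio)
    return box_h, box_w

# A 100x300 image cropped toward a 200x200 target: the largest square
# inside it is 100x100, which would then be resized up to 200x200.
print(largest_crop_box(100, 300, 200, 200))  # (100, 100)
```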
-
Committed by Edward Loper
Update Keras layers to use `tf.xyz` rather than `tf.raw_ops.xyz`. (E.g., use tf.matmul rather than tf.raw_ops.MatMul.) ExtensionTypes that want to support these layers can do so by adding dispatch. (E.g., tf.matmul supports dispatch; but tf.raw_ops.MatMul does not.) PiperOrigin-RevId: 395815316
-
Committed by Edward Loper
* When constructing a new KerasTensor: fail immediately if the wrapped TypeSpec has no shape, or if the shape is not a TensorShape. (Exception: NoneTensorSpec is allowed, even though it has no shape.) * When accessing KerasTensor.dtype, fail with a useful error message if the wrapped TypeSpec has no dtype field, or if the dtype field is not a DType. * For TypeSpecs other than DenseSpec/RaggedTensorSpec/SparseTensorSpec, the `set_shape` method requires and uses a `TensorSpec.with_shape()` method that returns a copy of the spec with a new shape. (Raise a useful error message if it's not there.) PiperOrigin-RevId: 395783105
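The fail-fast checks described above can be sketched in plain Python (all class and function names here are hypothetical stand-ins, not Keras's actual KerasTensor implementation):

```python
# Sketch of fail-fast validation of a wrapped spec (hypothetical names).
class TensorShape:
    """Toy stand-in for tf.TensorShape."""
    def __init__(self, dims):
        self.dims = dims

class Spec:
    """Toy stand-in for a TypeSpec with optional shape/dtype fields."""
    def __init__(self, shape=None, dtype=None):
        self.shape = shape
        self.dtype = dtype

def validate_wrapped_spec(spec):
    # Fail at construction time if the spec cannot back a KerasTensor.
    shape = getattr(spec, "shape", None)
    if not isinstance(shape, TensorShape):
        raise ValueError(
            "KerasTensor requires a TypeSpec whose `shape` is a "
            "TensorShape; got %r." % (shape,))

def spec_dtype(spec):
    # Fail with a clear message at access time, not with an AttributeError
    # deep inside unrelated code.
    dtype = getattr(spec, "dtype", None)
    if dtype is None:
        raise AttributeError(
            "The wrapped TypeSpec has no usable `dtype` field.")
    return dtype
```

The point of both helpers is the same: surface a precise error at the boundary rather than an obscure one later.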
-
Committed by Edward Loper
When converting ExtensionType inputs to match the expected dtype, only use tf.cast if the value doesn't already have the expected dtype. (Not all ExtensionTypes add dispatch handlers for the tf.cast method, so we should avoid calling it unless it's necessary.) PiperOrigin-RevId: 395780095
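The guard described above is just a dtype comparison before casting. A minimal sketch (hypothetical helper and toy types; not the actual Keras code):

```python
def maybe_cast(value, dtype, cast_fn):
    """Invoke cast_fn only when value's dtype differs from the expected
    dtype, so types without a cast handler are left untouched (sketch)."""
    if getattr(value, "dtype", None) == dtype:
        return value
    return cast_fn(value, dtype)

class Val:
    """Toy value carrying only a dtype string."""
    def __init__(self, dtype):
        self.dtype = dtype

calls = []
def fake_cast(v, dtype):
    # Records each cast so we can see the guard working.
    calls.append(dtype)
    return Val(dtype)

same = maybe_cast(Val("float32"), "float32", fake_cast)
diff = maybe_cast(Val("int32"), "float32", fake_cast)
print(len(calls))  # 1: only the mismatched value was cast
```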
-
Committed by Edward Loper
Update Keras layers to use `tf.xyz` rather than `tf.raw_ops.xyz`. (E.g., use tf.matmul rather than tf.raw_ops.MatMul.) ExtensionTypes that want to support these layers can do so by adding dispatch. (E.g., tf.matmul supports dispatch; but tf.raw_ops.MatMul does not.) PiperOrigin-RevId: 395747883
-
- 09 Sep 2021, 3 commits
-
-
Committed by Scott Zhu
PiperOrigin-RevId: 395526123
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 395505151
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 395504462
-
- 08 Sep 2021, 3 commits
-
-
Committed by Rishit Dagli
-
Committed by Scott Zhu
PiperOrigin-RevId: 395319738
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 395272456
-
- 07 Sep 2021, 3 commits
-
-
Committed by Rishit Dagli
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 395109473
-
Committed by Krish Rustagi
-
- 06 Sep 2021, 3 commits
-
-
Committed by A. Unique TensorFlower
The problem is that for a dataset with e.g. 14 elements and `steps_per_execution=5`, the `DataAdapter.steps` iterator does the following: 1. Yield `0`; 2. Yield `5`; 3. Set `steps_per_execution` to `4`, yield `10`; 4. Set `steps_per_execution` back to `5`. In distributed training, the steps are only enqueued, not executed immediately. So even if `steps_per_execution` is adjusted to `4` for the final step, and has the value `4` when the task is enqueued, it is set back to `5` before the task is actually run. As a result, 15 steps are computed instead of 14. This change makes the number of steps a parameter of the internal `train_function`, `predict_function`, and `test_function` functions, and passes a copy of the value of `steps_per_execution` at the time the task is enqueued, e.g. between steps 3 and 4 above. PiperOrigin-RevId: 395042946
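The fix described above can be modeled with plain Python (a simplified sketch, not the actual DataAdapter code): instead of reading a shared, mutable `steps_per_execution` at run time, each enqueued task carries its own copy of the step count.

```python
def enqueue_tasks(num_elements, steps_per_execution):
    """Yield (start_step, steps_to_run) pairs, giving the final partial
    batch its own copy of the step count (illustrative sketch)."""
    tasks = []
    step = 0
    while step < num_elements:
        # Snapshot the count at enqueue time; the last task gets the
        # remainder instead of the full steps_per_execution.
        steps_to_run = min(steps_per_execution, num_elements - step)
        tasks.append((step, steps_to_run))
        step += steps_to_run
    return tasks

# 14 elements with steps_per_execution=5: the last task runs 4 steps,
# and that count travels with the task instead of being re-read from a
# shared variable that may already have been reset to 5 by run time.
print(enqueue_tasks(14, 5))  # [(0, 5), (5, 5), (10, 4)]
```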
-
Committed by Scott Zhu
PiperOrigin-RevId: 395027075
-
Committed by Scott Zhu
Fix various failing tests in v1. Most of them fail because of slightly different behavior between v1 and v2; some of them only target v2 behavior. PiperOrigin-RevId: 395026999
-
- 04 Sep 2021, 6 commits
-
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 394803392
-
Committed by Scott Zhu
In v1, since there is no global policy, the layer compute_dtype is "_inferred" from the input, and the inferred dtype is actually populated on the cell. PiperOrigin-RevId: 394779149
-
Committed by Scott Zhu
Most of them fail because the actual code is expected to run only in v2 (e.g., it needs eager execution/resource variables, or a certain fix we added applies only to the v2 code path). PiperOrigin-RevId: 394765626
-
Committed by Scott Zhu
PiperOrigin-RevId: 394765449
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 394747046
-
Committed by Paul Chiang
PiperOrigin-RevId: 394724555
-
- 03 Sep 2021, 6 commits
-
-
Committed by Mikael Toresen
-
Committed by Mikael Toresen
-
Committed by Rick Chao
Add skipTest for a couple of methods in OSS. PiperOrigin-RevId: 394625024
-
Committed by A. Unique TensorFlower
Previously, it was possible for custom objects to leak outside of the custom_object_scope() context manager when used in a multi-threaded manner. PiperOrigin-RevId: 394586210
-
Committed by Katherine Wu
PiperOrigin-RevId: 394567286
-
Committed by Scott Zhu
The test/benchmark could break if the Keras code is run with TF v1 behavior. PiperOrigin-RevId: 394564902
-
- 02 Sep 2021, 2 commits
-
-
Committed by DachuanZhao
-
Committed by DLPerf
-
- 01 Sep 2021, 3 commits
-
-
Committed by Mikael Toresen
-
Committed by Scott Zhu
There is no point in redefining a constant list inside the loop. PiperOrigin-RevId: 394103610
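The pattern being fixed is simple hoisting: a constant list rebuilt on every iteration is moved outside the loop. A generic illustration (hypothetical names; not the actual Keras code):

```python
# Before: the list literal is re-created on every iteration.
def contains_reserved_slow(names):
    hits = []
    for name in names:
        reserved = ["input", "output", "training"]  # rebuilt each pass
        if name in reserved:
            hits.append(name)
    return hits

# After: the constant is defined once, outside the loop (a tuple also
# signals immutability).
_RESERVED = ("input", "output", "training")

def contains_reserved(names):
    return [name for name in names if name in _RESERVED]

# Both produce the same result; only the allocation per iteration goes away.
print(contains_reserved(["input", "x", "training"]))  # ['input', 'training']
```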
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 394032940
-
- 31 Aug 2021, 3 commits
-
-
Committed by DLPerf
-
Committed by Reed Wanderman-Milne
Also change some references to the experimental "set_policy" function to the non-experimental "set_global_policy" function. PiperOrigin-RevId: 393891991
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 393882385
-