- 13 Dec 2018: 34 commits
-
-
Committed by Scott Zhu
Test case that can only run in v1 has a bug attached. PiperOrigin-RevId: 225271476
-
Committed by Scott Zhu
PiperOrigin-RevId: 225269741
-
Committed by Sanjoy Das
PiperOrigin-RevId: 225269293
-
Committed by Lukasz Kaiser
Similar to cl/198786266, specify `maximum_iterations` for the tf.while_loop in tf.foldl and tf.foldr to be compatible with XLA. PiperOrigin-RevId: 225268779
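XLA requires loops with a statically known trip-count bound, which is why `maximum_iterations` has to be threaded through to the underlying tf.while_loop. A minimal pure-Python sketch of the idea (not the actual TensorFlow implementation; the function name and signature here are illustrative only):

```python
def foldl_capped(fn, elems, initializer, maximum_iterations=None):
    """Illustrative left fold whose trip count is capped by
    maximum_iterations, mirroring how a bounded tf.while_loop
    gives XLA a static upper bound on iterations."""
    acc = initializer
    for i, elem in enumerate(elems):
        if maximum_iterations is not None and i >= maximum_iterations:
            break  # stop once the static bound is reached
        acc = fn(acc, elem)
    return acc
```

With no cap the fold consumes all elements; with a cap it stops early, which is the behavior a bounded while loop guarantees to the compiler.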
-
Committed by Russell Power
PiperOrigin-RevId: 225265200
-
Committed by Francois Chollet
PiperOrigin-RevId: 225264988
-
Committed by Priya Gupta
Eager function: Do not create a set of input ops each time. This can take a very long time for big models. For example, when building a function for ResNet50, this increased the time to create the eager function by 72 times. PiperOrigin-RevId: 225262498
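The fix is the classic compute-once-and-cache pattern: build the set of input ops lazily on first access and reuse it, instead of reconstructing it on every call. A simplified sketch with hypothetical names (`FuncBuilder`, `input_set` are illustrative, not TensorFlow API):

```python
class FuncBuilder:
    """Sketch: cache an expensive derived set instead of rebuilding it
    on every access (hypothetical stand-in for the eager-function code)."""

    def __init__(self, inputs):
        self._inputs = inputs
        self._input_set = None  # built lazily, then cached
        self.build_count = 0    # how many times the set was constructed

    @property
    def input_set(self):
        if self._input_set is None:
            self.build_count += 1
            self._input_set = set(self._inputs)  # the expensive step
        return self._input_set
```

For a model with thousands of ops, turning a per-call O(n) set construction into a one-time cost is exactly the kind of change that yields the reported 72x difference.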
-
Committed by Sourabh Bajaj
Move the reduction of non-distributed values and share the code with TPUStrategy; also improve the print output of TPUMirroredVariable. PiperOrigin-RevId: 225259008
-
Committed by A. Unique TensorFlower
the FuncGraph.as_default scope instead of __init__. Fixes issues with the global Keras FuncGraph keeping state between tests. PiperOrigin-RevId: 225257506
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225257343
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225256432
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225256193
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225255718
-
Committed by Allen Lavoie
Adds some explanation of this in the docstring and some better exceptions. Having it non-experimental would be pretty confusing, since most users would try it without enable_eager_execution() and run into strange errors which we don't plan to fix. PiperOrigin-RevId: 225254705
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 225253270
-
Committed by Jian Li
PiperOrigin-RevId: 225249344
-
Committed by Allen Lavoie
Copies and pastes the existing Optimizer checkpointing code, and stops adding unconditional dependencies on slot variables (which were based on ops.uid() and so not reproducible across program runs). PiperOrigin-RevId: 225248820
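The problem with ops.uid()-based dependency names is that they encode a global counter, so the name a slot variable gets depends on how many ids were handed out earlier in the run; a checkpoint written in one run may not line up with a graph built in another. A toy sketch of the contrast (the helper names here are illustrative, not the real Optimizer code):

```python
import itertools

_uid_counter = itertools.count()  # stand-in for a global ops.uid() counter

def uid_name(slot):
    """Non-reproducible: the name depends on global program history."""
    return f"slot_{next(_uid_counter)}_{slot}"

def deterministic_name(var_name, slot):
    """Reproducible across runs: derived only from stable inputs."""
    return f"{var_name}/{slot}"
```

Deriving checkpoint keys purely from the variable and slot names makes them stable across program runs, which is what restoring a checkpoint requires.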
-
Committed by Katherine Wu
PiperOrigin-RevId: 225245412
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 225237733
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225236744
-
Committed by Francois Chollet
PiperOrigin-RevId: 225231668
-
Committed by A. Unique TensorFlower
custom op, it is up to the tf-lite user to provide the implementation. Best to assume it exists so the user can implement it. PiperOrigin-RevId: 225228337
-
Committed by Dan Moldovan
Reduce the cost of serializing ConversionOptions to code by using the more efficient inspect.util.getqualifiedname, reducing its max_depth, and falling back to caching the value in the namespace. The latter step makes it more difficult to run the generated code afterwards, but it should in turn speed up the conversion process. This also adds an extra check to tf_decorator to improve robustness. PiperOrigin-RevId: 225226256
-
Committed by A. Unique TensorFlower
These tests share the same assertion: that weighting a particular class's loss over other classes (by passing in `sample_weight` into `model.fit`) leads to a lower evaluation loss when evaluating test data limited to that class compared to evaluating all test data. My theory is that the models in these tests are not trained enough for that assumption to always hold true, which is why they are flaky. Increased the weight from 2 to 10 and the training epochs from 5 to 10. PiperOrigin-RevId: 225218063
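The assertion these tests rely on is that upweighting a class's examples via `sample_weight` pulls the weighted training loss toward that class, so the model fits it better. The arithmetic behind the weighting can be sketched in a few lines (a simplified stand-in for what model.fit computes, not Keras code):

```python
def weighted_mean_loss(losses, weights):
    """Weighted mean of per-example losses, as sample_weight scales
    each example's contribution during training (simplified sketch)."""
    total = sum(l * w for l, w in zip(losses, weights))
    return total / sum(weights)
```

With two examples of losses 1.0 (the weighted class) and 3.0 (the rest), equal weights give a mean of 2.0, while a 10x weight on the first example pulls the weighted loss to 13/11, so the optimizer is driven much harder to reduce the weighted class's loss. Raising the weight from 2 to 10 widens this gap, which is why it makes the test's expected ordering more robust.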
-
Committed by Zhenyu Tan
PiperOrigin-RevId: 225217785
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225212001
-
Committed by Gaurav Jain
PiperOrigin-RevId: 225210711
-
Committed by Artem Belevich
PiperOrigin-RevId: 225208397
-
Committed by Peter Hawkins
PiperOrigin-RevId: 225205868
-
Committed by A. Unique TensorFlower
1. Only MIN_COMBINED mode is supported; 2. Reshape the output to [d0, ..., dn * unpack_size] if the input shape is [d0, ..., dn]; 3. Only uint32 is supported for the input; 4. The output data type is bfloat16; 5. Only uint8 or uint16 is supported for the original unpacked input. PiperOrigin-RevId: 225203930
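The reshape rule in point 2 only touches the innermost dimension: each packed element expands into `unpack_size` values, so the last dimension is multiplied while the leading dimensions are unchanged. A small shape-arithmetic sketch (the helper name is illustrative):

```python
def unpacked_output_shape(input_shape, unpack_size):
    """Shape rule from the commit: [d0, ..., dn] -> [d0, ..., dn * unpack_size].
    Only the innermost dimension grows; leading dimensions are preserved."""
    *leading, last = input_shape
    return leading + [last * unpack_size]
```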
-
Committed by Skye Wanderman-Milne
PiperOrigin-RevId: 225202451
-
Committed by Scott Zhu
This constraint was originally added due to the different weight-format issue between canonical and cudnn (extra input bias). Now that the input bias is fed as zeros in cudnn mode and the weights are unified into one format, having a bias regularizer should not be an issue. PiperOrigin-RevId: 225193782
-
Committed by Mihai Maruseac
PiperOrigin-RevId: 225189182
-
Committed by Skye Wanderman-Milne
Removing the LoopCond of a while_loop can cause the partitioner to fail with: "A cross-device loop must have a pivot predicate". For some reason this only triggers with while_v2 (the lowered while loop is slightly different from what would be produced by the original while_loop). PiperOrigin-RevId: 225188075
-
- 12 Dec 2018: 6 commits
-
-
Committed by Gaurav Jain
In addition, fix a few eval() calls as well as remove some @test_util.run_v1_only annotations. PiperOrigin-RevId: 225180248
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225178266
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225140840
-
Committed by A. Unique TensorFlower
The previous format didn't include the timezone. PiperOrigin-RevId: 225138116
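A timestamp without a UTC offset is ambiguous: the same wall-clock string can name different instants depending on where it was produced. Appending the `%z` offset makes it unambiguous. A minimal sketch using Python's standard library (the function name is illustrative):

```python
from datetime import datetime, timezone

def format_with_tz(dt):
    """Format an aware datetime including its UTC offset (%z),
    so the instant it names is unambiguous."""
    return dt.strftime("%Y-%m-%d %H:%M:%S%z")
```

For an aware UTC datetime this yields a suffix like `+0000`; a naive datetime would silently produce no offset at all, which is exactly the ambiguity the commit removes.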
-
Committed by A. Unique TensorFlower
Remove the :android_tensorflow_lib_selective_registration* aliases; targets using selective registration can now use the :android_tensorflow_lib_lite* targets. PiperOrigin-RevId: 225134497
-
Committed by Skye Wanderman-Milne
PiperOrigin-RevId: 225131361
-