- 13 Dec, 2018, 40 commits
-
Committed by Trevor Morris
-
Committed by Trevor Morris
-
Committed by Trevor Morris
-
Committed by Trevor Morris
-
Committed by Trevor Morris
-
Committed by Trevor Morris
-
Committed by Trevor Morris
Fix typo. Refactor. Add Ok unit tests. Improve unit tests and comments.
-
Committed by Kay Zhu
for the backward pass. Note that "out of boundary" here means outside the (-1, image_size) index range, rather than (0, image_size - 1). As a result, the images are padded with 0 before the gather/scatter operation is performed, then sliced back to obtain the actual results. PiperOrigin-RevId: 225278400
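The pad-gather-slice idea described above can be sketched in plain NumPy (a hedged illustration, not the actual TensorFlow kernel; the array and index values here are made up):

```python
import numpy as np

# An index of -1 or image_size is "out of boundary" but must still yield a
# value: pad the image with one zero on each side, shift indices by +1,
# then gather from the padded array.
image = np.array([10.0, 20.0, 30.0])          # image_size == 3
padded = np.pad(image, 1, constant_values=0)  # [0, 10, 20, 30, 0]
indices = np.array([-1, 0, 2, 3])             # -1 and 3 are out of boundary
gathered = padded[indices + 1]                # out-of-boundary slots read 0
```

The same trick works in the scatter direction: scatter into a padded buffer, then slice off the first and last elements to recover the real result.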
-
Committed by Rick Chao
Export tf.train.* session_run_hook.py classes to tf.estimator.* (exporting to both v1 and v2). Keep the existing ones only in v1. PiperOrigin-RevId: 225276892
-
Committed by Anna R
PiperOrigin-RevId: 225276483
-
Committed by A. Unique TensorFlower
Allows Keras optimizer_v2 classes to be specified via string names in TF 1.x (and moves optimizer checks in eager to after the optimizer is deserialized). PiperOrigin-RevId: 225276345
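The string-name lookup can be sketched in plain Python (a hypothetical registry, not the actual Keras deserialization code; the names and return values are stand-ins):

```python
# Hypothetical name -> optimizer registry; real Keras maps names to
# optimizer_v2 classes, the string values here are placeholders.
_OPTIMIZERS = {"adam": "AdamV2", "sgd": "SGDV2", "rmsprop": "RMSpropV2"}

def get_optimizer(identifier):
    """Resolve a string name to an optimizer; pass other values through."""
    if isinstance(identifier, str):
        key = identifier.lower()
        if key not in _OPTIMIZERS:
            raise ValueError("Unknown optimizer: %s" % identifier)
        return _OPTIMIZERS[key]
    # Already an optimizer instance: return it unchanged.
    return identifier
```

Any validity checks on the optimizer then run on the resolved object, after deserialization, which is the reordering the commit describes.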
-
Committed by Scott Zhu
The test case that can only run in v1 has a bug attached. PiperOrigin-RevId: 225271476
-
Committed by Scott Zhu
PiperOrigin-RevId: 225269741
-
Committed by Sanjoy Das
PiperOrigin-RevId: 225269293
-
Committed by Lukasz Kaiser
Similar to cl/198786266, specify the `maximum_iterations` to tf.while_loop in tf.foldl and tf.foldr to be compatible with XLA. PiperOrigin-RevId: 225268779
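The effect of an explicit iteration bound can be sketched in pure Python (the real change passes `maximum_iterations` through to `tf.while_loop` so XLA can statically bound the loop; this toy `foldl` is only an illustrative stand-in):

```python
def foldl(fn, elems, initializer, maximum_iterations=None):
    """Left fold with an optional explicit iteration cap, mirroring the
    `maximum_iterations` argument threaded through to the while loop."""
    bound = len(elems)
    if maximum_iterations is not None:
        bound = min(bound, maximum_iterations)
    acc = initializer
    for i in range(bound):
        acc = fn(acc, elems[i])
    return acc
```

In graph mode the bound matters even when it never truncates the loop: XLA needs a statically known upper limit on trip count to compile the while loop at all.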
-
Committed by Russell Power
PiperOrigin-RevId: 225265200
-
Committed by Francois Chollet
PiperOrigin-RevId: 225264988
-
Committed by Priya Gupta
Eager function: do not create a set of input ops each time. This can take a very long time for big models. For example, when building a function for ResNet50, this increased the time to create the eager function by 72 times. PiperOrigin-RevId: 225262498
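The caching pattern behind that speedup can be sketched in plain Python (a hedged illustration, not the actual eager-function code; `FuncSketch` and its fields are made up):

```python
class FuncSketch:
    """Builds the set of input ops once and reuses it on later lookups."""

    def __init__(self, input_ops):
        self._input_ops = list(input_ops)
        self._input_op_set = None  # lazily built, then cached

    def is_input(self, op):
        # Rebuilding this set on every membership check is the repeated
        # work that made function creation for a big model so slow.
        if self._input_op_set is None:
            self._input_op_set = set(self._input_ops)
        return op in self._input_op_set
```

With thousands of ops in a model like ResNet50, turning an O(n) rebuild per lookup into a one-time build plus O(1) lookups is where the large constant-factor win comes from.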
-
Committed by Sourabh Bajaj
Move the reduce of non-distributed values and share the code with TPUStrategy; also improve the print output of TPUMirroredVariable. PiperOrigin-RevId: 225259008
-
Committed by A. Unique TensorFlower
the FuncGraph.as_default scope instead of __init__. Fixes issues with the global Keras FuncGraph keeping state between tests. PiperOrigin-RevId: 225257506
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225257343
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225256432
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225256193
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225255718
-
Committed by Allen Lavoie
Adds some explanation of this in the docstring and some better exceptions. Having it non-experimental would be pretty confusing, since most users would try it without enable_eager_execution() and run into strange errors which we don't plan to fix. PiperOrigin-RevId: 225254705
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 225253270
-
Committed by Jian Li
PiperOrigin-RevId: 225249344
-
Committed by Allen Lavoie
Copies and pastes the existing Optimizer checkpointing code, and stops adding unconditional dependencies on slot variables (which were based on ops.uid() and so were not reproducible across program runs). PiperOrigin-RevId: 225248820
-
Committed by Katherine Wu
PiperOrigin-RevId: 225245412
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 225237733
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225236744
-
Committed by Francois Chollet
PiperOrigin-RevId: 225231668
-
Committed by A. Unique TensorFlower
custom op, it is up to the tf-lite user to provide the implementation. Best to assume it exists so the user can implement it. PiperOrigin-RevId: 225228337
-
Committed by Dan Moldovan
Reduce the cost of serializing ConversionOptions to code by using the more efficient inspect.util.getqualifiedname, reducing its max_depth, and falling back to caching the value in the namespace. The latter step makes it more difficult to run the generated code afterwards, but it should in turn speed up the conversion process. This also adds an extra check to tf_decorator to improve robustness. PiperOrigin-RevId: 225226256
-
Committed by A. Unique TensorFlower
These tests share the same assertion: that weighting a particular class's loss more heavily than other classes (by passing `sample_weight` into `model.fit`) leads to a lower evaluation loss when evaluating test data limited to that class, compared to evaluating all test data. My theory is that the models in these tests are not trained enough for that assumption to always hold, which is why they are flaky. Increased the weight from 2 to 10 and the training epochs from 5 to 10. PiperOrigin-RevId: 225218063
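The weighting the tests rely on can be sketched as a weighted mean in NumPy (illustrative numbers only, not the tests' actual data or model):

```python
import numpy as np

# Per-example losses; the first example belongs to the up-weighted class
# (weight 10 after the change described above, vs. 2 before).
losses = np.array([0.2, 1.0, 1.5])
weights = np.array([10.0, 1.0, 1.0])

# Weighted evaluation loss: the up-weighted class dominates the average,
# so training that lowers its loss lowers the overall metric more reliably.
weighted_loss = float((losses * weights).sum() / weights.sum())
```

Raising the weight from 2 to 10 makes the favored class dominate the objective more strongly, so even a lightly trained model should satisfy the tests' assertion.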
-
Committed by Zhenyu Tan
PiperOrigin-RevId: 225217785
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 225212001
-
Committed by Gaurav Jain
PiperOrigin-RevId: 225210711
-
Committed by Artem Belevich
PiperOrigin-RevId: 225208397
-
Committed by Peter Hawkins
PiperOrigin-RevId: 225205868
-