- 11 December 2018: 40 commits
-
Committed by hyunyoung
-
Committed by A. Unique TensorFlower
Add a workaround for the latest toolchain repository not supporting older Bazel versions; only load it conditionally. PiperOrigin-RevId: 224965872
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224956744
-
Committed by Jing Li
Rewrite the Adam and LazyAdam optimizers to take the global step for computing the beta1 and beta2 accumulators, instead of having each optimizer instance keep its own independent beta1 and beta2 accumulators as non-slot variables. PiperOrigin-RevId: 224948020
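The idea in this commit (deriving the bias-correction terms from the global step rather than keeping stateful accumulator variables) can be sketched in plain Python. This is a minimal illustration of the arithmetic only, not TensorFlow's actual optimizer code; the function name is made up:

```python
def beta_powers(beta1, beta2, global_step):
    """Bias-correction terms beta1^t and beta2^t, computed from step count t."""
    return beta1 ** global_step, beta2 ** global_step

# Equivalent stateful accumulators, updated multiplicatively every step
# (the per-instance non-slot-variable scheme the commit moves away from):
b1_acc, b2_acc = 1.0, 1.0
for _ in range(3):
    b1_acc *= 0.9
    b2_acc *= 0.999

# After t steps the two formulations agree (up to float rounding).
assert abs(b1_acc - beta_powers(0.9, 0.999, 3)[0]) < 1e-12
assert abs(b2_acc - beta_powers(0.9, 0.999, 3)[1]) < 1e-12
```

Computing the powers from the step also means the optimizer carries no extra state that could drift across instances.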
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224937131
-
Committed by Taylor Robie
PiperOrigin-RevId: 224936924
-
Committed by Tong Shen
PiperOrigin-RevId: 224933587
-
Committed by A. Unique TensorFlower
in the specific case of:
* Eager execution enabled
* Inside a FuncGraph, inside a graph
* In a replica context (such as in a call to `tf.distribute.Strategy.call_for_each_replica()`).
PiperOrigin-RevId: 224930182
-
Committed by Justin Lebar
Previously we only had a flag for disabling specific passes. But being able to disable all passes is helpful if you have some already-optimized HLO that you just want to run. PiperOrigin-RevId: 224928095
-
Committed by Justin Lebar
When you pass XLA_FLAGS to replay_computation, you very likely want it to apply only to the actual computation(s) being run, not to the XLA computations that replay_computation synthesizes to generate fake data for the "real" ones' arguments. PiperOrigin-RevId: 224927003
-
Committed by Justin Lebar
This gets a DebugOptions struct with all the defaults filled in as though XLA_FLAGS were empty. This is useful when you want to run an XLA computation and explicitly ignore any XLA_FLAGS passed to the binary. PiperOrigin-RevId: 224925335
-
Committed by Dan Moldovan
Expose a `tensorflow.autograph` namespace, with a minimal core API under experimental. Clean the documentation for public symbols. PiperOrigin-RevId: 224921147
-
Committed by Francois Chollet
Enable mixing value tensors (eager tensors or NumPy arrays) and Keras symbolic tensors when building Keras graphs-of-layers in an eager scope. In these cases, the value tensors are treated as symbolic constants. This enables the following pattern to work the same way in both V1 and V2:

```python
lstm = LSTM(2)
inputs = keras.Input((None, 3))
outputs = lstm(inputs, initial_state=tf.ones(shape))
```

(Without this change, the above code works in V1 but fails in V2 with an artificial exception.)

Known issue: when a random tensor is used, a (usually harmless) behavior discrepancy remains between V1 and V2: in V2 the same random value is used every time, whereas in V1 new random values are drawn each time (since the tensor is treated as a random op and not as a constant). We think this is not a problem, because in V2 users should have the mental model "tensors are values" and would thus expect a random tensor to behave like a constant value, not like a random generator. PiperOrigin-RevId: 224915621
-
Committed by A. Unique TensorFlower
This bypasses a nullptr error that appears downstream due to referencing a length-zero array in a temporary buffer. PiperOrigin-RevId: 224915050
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 224914276
-
Committed by Katherine Wu
PiperOrigin-RevId: 224913339
-
Committed by Russell Power
This disables the keepalive watchdog for TF/gRPC channels. The watchdog ping timer is intended to monitor channels in case they have gone "stale"; if this occurs, any pending RPCs are marked failed. This interacts poorly with large TF models, where we can saturate the network exchanging tensors, causing the watchdog ping to be delayed. The timer is not essential (normal deadline processing and socket termination are still respected), so we can disable it here with minimal risk. PiperOrigin-RevId: 224913045
-
Committed by Gunhan Gulsoy
It is a bash test, and language filters do not work properly on Windows. PiperOrigin-RevId: 224912071
-
Committed by Kay Zhu
PiperOrigin-RevId: 224905468
-
Committed by Scott Zhu
Also stop exporting CuDNNLSTM, since it's all covered by the unified LSTM. PiperOrigin-RevId: 224900214
-
Committed by Sanjoy Das
Returning void is more common than returning Status, so pick the longer name for the less common variant. PiperOrigin-RevId: 224897169
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224897071
-
Committed by Alexandre Passos
PiperOrigin-RevId: 224894987
-
Committed by Russell Power
PiperOrigin-RevId: 224894043
-
Committed by Alexandre Passos
PiperOrigin-RevId: 224893836
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224891138
-
Committed by A. Unique TensorFlower
`keras.backend.function`. PiperOrigin-RevId: 224886577
-
Committed by Frank Chen
PiperOrigin-RevId: 224877586
-
Committed by Dan Moldovan
Allow completely stateless (i.e., with no outputs) loops. Simplify the handling of stateless conditionals. This change still will not support stateless loops pre-v2 until we add auto deps; however, it works properly in tf.function. PiperOrigin-RevId: 224876064
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224875931
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 224874845
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 224872228
-
Committed by Anna R
PiperOrigin-RevId: 224870669
-
Committed by Eugene Brevdo
This fixes a bug where Adam beta*_power variables were always created as RefVars even if the optimizer acts on ResourceVars. This broke certain defun + Adam use cases. Also fixed the unit tests, which *always* created ResourceVariables (ever since variables.Variable() constructor became aliased to ResourceVariables). PiperOrigin-RevId: 224869338
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 224869113
-
Committed by Shivani Agrawal
PiperOrigin-RevId: 224865488
-
Committed by James Ring
PiperOrigin-RevId: 224863771
-
Committed by Scott Zhu
I believe that for historical reasons the activation function for LSTM is hard_sigmoid, because it is faster compared to sigmoid. With the new LSTM, the performance issue should be fixed by Grappler swapping the backend. PiperOrigin-RevId: 224863406
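For context on the speed/accuracy trade-off mentioned above: Keras' hard_sigmoid is the piecewise-linear approximation clip(0.2*x + 0.5, 0, 1), which avoids the exp() call of the exact sigmoid. A plain-Python sketch comparing the two (illustrative only, not the Keras backend implementation):

```python
import math

def sigmoid(x):
    """Exact logistic sigmoid: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

def hard_sigmoid(x):
    """Keras-style piecewise-linear approximation: clip(0.2*x + 0.5, 0, 1)."""
    return max(0.0, min(1.0, 0.2 * x + 0.5))

# Both agree at 0 and saturate similarly at the tails.
print(sigmoid(0.0), hard_sigmoid(0.0))            # 0.5 0.5
print(round(sigmoid(4.0), 3), hard_sigmoid(4.0))  # 0.982 1.0
```

The approximation saturates exactly at |x| >= 2.5, whereas the true sigmoid only approaches 0 and 1 asymptotically.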
-
Committed by Skye Wanderman-Milne
The new toggle replaces ENABLE_COND_V2, ENABLE_WHILE_V2, and ENABLE_TENSOR_ARRAY_V2. This means that these can't be toggled independently anymore, notably that v1 TensorArrays can only be run with v1 loops, and v2 TensorArrays with v2 loops. This also introduces a corresponding environment variable TF_ENABLE_CONTROL_FLOW_V2. I kept the old env vars as well in case people are using them. They all flip the new single toggle now. In addition, this change removes some while_v2 code for dealing with v1 TensorArrays, since this is no longer a supported configuration. PiperOrigin-RevId: 224862245
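Per the commit above, the single toggle can be flipped from the environment. A minimal sketch of setting it from Python; the variable must be set before TensorFlow is imported, and the surrounding program is assumed:

```python
import os

# Flip the single v2 control-flow toggle (replaces the three per-feature
# variables named above). Must happen before `import tensorflow`.
os.environ["TF_ENABLE_CONTROL_FLOW_V2"] = "1"
```

Setting one of the old variables (e.g. ENABLE_COND_V2) has the same effect after this change, since they all flip the same toggle.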
-
Committed by Katherine Wu
Add an attribute to the Keras model which generates an exportable tf.function. SavedModel save now looks for this attribute when searching for a function to export. PiperOrigin-RevId: 224861089
-