- 26 Feb 2019, 21 commits
-
-
Committed by Stephan Lee
The Trace API allows users to trace execution and collect graph or profile information. PiperOrigin-RevId: 235563123
-
Committed by Gaurav Jain
This avoids the need for the user to re-specify the output path. In addition, we make the inplace arg a bit friendlier by making it a boolean toggle rather than requiring the user to write "True" as its value. PiperOrigin-RevId: 235562748
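The boolean-toggle pattern described above can be sketched with `argparse`. This is a hedged illustration of the general pattern only; the flag name and the actual tool's interface are assumptions, not the real CLI:

```python
import argparse

# Hypothetical sketch: with action="store_true", --inplace is a toggle,
# so users write `--inplace` instead of `--inplace True`.
parser = argparse.ArgumentParser()
parser.add_argument("--inplace", action="store_true",
                    help="Modify the input file in place.")

args = parser.parse_args(["--inplace"])
print(args.inplace)   # toggle present -> True

args = parser.parse_args([])
print(args.inplace)   # toggle absent -> False
```

The design point is that a toggle cannot be mis-set to a truthy-looking string such as "False", which `--inplace False` style flags commonly invite.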
-
Committed by Peter Hawkins
The XLA Python extension is packaged separately as "jaxlib", but XLA itself is part of TensorFlow. Some of the same basic protocol buffers are used by both (e.g., xla_data.proto), leading to a conflict if a proto is imported twice into the same Python interpreter via different routes (e.g., https://github.com/google/jax/issues/349), since a single global C++ protocol buffer registry exists for the entire interpreter. The simplest solution, short of a significant refactoring of the TensorFlow->XLA dependency structure, seems to be to change xla_client.py not to depend on any XLA protocol buffers. A few other possible alternatives are discussed in https://github.com/google/jax/issues/349. Fortunately, we use protocol buffers only in inessential ways in the XLA client, mostly for objects such as convolution dimension numbers. Instead, create Python objects that play the same role and that duck-type as protocol buffers well enough to keep the SWIG bindings happy. Remove an unused function, OpMetadataToProto. Change Computation.GetProto() to Computation.GetSerializedProto(). In passing, remove a comment duplicated between xla_data.i and local_computation_builder.i. PiperOrigin-RevId: 235560841
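A minimal sketch of the duck-typing idea. The class and helper below are hypothetical illustrations, not the actual xla_client definitions: a plain Python object exposes the same attribute surface as the generated proto message, so code that only reads and writes fields needs no proto import and adds nothing to the global C++ proto registry:

```python
class ConvolutionDimensionNumbers:
  """Plain-Python stand-in for a proto message of the same name.

  Hypothetical sketch: it duck-types the generated proto class by
  exposing the same mutable fields, without importing any proto.
  """

  def __init__(self):
    self.input_batch_dimension = 0
    self.input_feature_dimension = 0
    self.input_spatial_dimensions = []
    self.kernel_input_feature_dimension = 0
    self.kernel_output_feature_dimension = 0
    self.kernel_spatial_dimensions = []


def make_default_conv_dimension_numbers(num_spatial_dims=2):
  """Hypothetical helper: fills in NHWC-style defaults, playing the
  role that proto-constructing code played before."""
  dn = ConvolutionDimensionNumbers()
  dn.input_batch_dimension = 0
  dn.input_feature_dimension = num_spatial_dims + 1
  dn.input_spatial_dimensions = list(range(1, num_spatial_dims + 1))
  dn.kernel_output_feature_dimension = num_spatial_dims
  dn.kernel_input_feature_dimension = num_spatial_dims + 1
  dn.kernel_spatial_dimensions = list(range(num_spatial_dims))
  return dn
```

Callers that previously did `dn.input_batch_dimension = 0` on a proto continue to work unchanged, which is what "duck-types well enough" means here.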
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 235559353
-
Committed by Akshay Modi
This is not the best heuristic (though not necessarily worse than the previous one, which simply selected the first input), but it attempts to make function placement agnostic to capture ordering. PiperOrigin-RevId: 235556736
-
Committed by Stephan Lee
PiperOrigin-RevId: 235552900
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235552811
-
Committed by Nupur Garg
PiperOrigin-RevId: 235552415
-
Committed by Benjamin Kramer
This is as simple as correcting the RHS index so it doesn't lose the batch dimensions. Fused elemental dots only occur for small operands on CPU. PiperOrigin-RevId: 235549797
-
Committed by Michael Banfield
PiperOrigin-RevId: 235549420
-
Committed by Benjamin Kramer
There's no good reason for disabling it completely. Right now this falls back to the slow generic implementation. We could make it use Eigen if we want to. PiperOrigin-RevId: 235548224
-
Committed by Taylor Robie
When creating EagerTensors under a graph scope, they will be captured by the graph and treated as constants. PiperOrigin-RevId: 235547535
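The capture behavior can be modeled with a toy sketch. Everything below is hypothetical and is not the TensorFlow implementation; it only illustrates the idea that, while a graph scope is active, an eager value created inside it gets recorded by the graph as a constant:

```python
# Toy model of eager-value capture (hypothetical; not TF internals).
_active_graph = None

class Graph:
  def __init__(self):
    self.captures = []          # eager values captured as constants

  def __enter__(self):
    global _active_graph
    _active_graph = self
    return self

  def __exit__(self, *exc):
    global _active_graph
    _active_graph = None

def eager_tensor(value):
  """Creates an eager value; under an active graph scope the value is
  also captured by the graph and treated as a constant."""
  if _active_graph is not None:
    _active_graph.captures.append(("constant", value))
  return value

with Graph() as g:
  x = eager_tensor(3.0)

print(g.captures)   # -> [('constant', 3.0)]
```

The key point the commit message makes is that the capture is implicit: the user just creates the value, and the enclosing graph scope decides how it is recorded.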
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 235546699
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 235546488
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 235545189
-
Committed by Nupur Garg
PiperOrigin-RevId: 235544520
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 235542344
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235541426
-
Committed by Tom Hennigan
After attempting to integrate `tf.Module` into existing codebases (e.g. `tf.keras`) we've found that the automatic name scoping is too invasive (e.g. changing op and variable names) and it is desirable to disable it ~everywhere. We propose that name scoping for `tf.Module` becomes opt-in:

>>> class MyModule(tf.Module):
...
...   @tf.Module.with_name_scope
...   def auto_name_scope(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(1., name='w')
...     return x * self.w
...
...   def manual_name_scope(self, x):
...     if not hasattr(self, 'w'):
...       with self.name_scope:
...         self.w = tf.Variable(1., name='w')
...     return x * self.w
...
...   def no_name_scope(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(1., name='w')
...     return x * self.w

We will move opt-out name scoping into Sonnet:

>>> class MyModule(snt.Module):
...
...   def auto_name_scope(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(1., name='w')
...     return x * self.w
...
...   @snt.no_name_scope
...   def no_name_scope(self, x):
...     if not hasattr(self, 'w'):
...       self.w = tf.Variable(1., name='w')
...     return x * self.w

In TF2 name scopes are cosmetic and this should be less of a big deal. We might consider encouraging users who want to filter on names to instead use flatten to extract a state dictionary for their objects (c.f. https://github.com/tensorflow/community/pull/56#discussion_r255048762). I have moved the automatic name scoping logic (metaclass etc.) and associated tests into Sonnet 2. PiperOrigin-RevId: 235540184
-
Committed by Benjamin Kramer
It's not clear what this comment refers to, but it's from 2017 and the test just works. PiperOrigin-RevId: 235539638
-
Committed by Tom Hennigan
PiperOrigin-RevId: 235536186
-
- 25 Feb 2019, 16 commits
-
-
Committed by André Susano Pinto
integration tests. PiperOrigin-RevId: 235526039
-
Committed by Chris Jones
PiperOrigin-RevId: 235519570
-
Committed by Tom Hennigan
PiperOrigin-RevId: 235518682
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235511159
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235507181
-
Committed by Benjamin Kramer
This was fixed in 0bd78003 PiperOrigin-RevId: 235501348
-
Committed by Benjamin Kramer
This is a stub implementation that always returns zero as replication is not supported on CPU or GPU currently. PiperOrigin-RevId: 235497567
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235488332
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235485786
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235484830
-
Committed by Adrian Kuegel
This probably didn't matter before, but now Sort has a comparison computation. Also add an optimization that avoids a convert to BF16 if the user is a convert to F32. PiperOrigin-RevId: 235481597
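The convert-elision idea can be sketched on a toy IR. This is a hypothetical illustration, not XLA's HLO: when a convert-to-F32 node's operand is a convert-to-BF16, the pair is skipped and the original F32 value is used directly. (In the real BF16 pass this is applied where the precision reduction was optional, since a BF16 round-trip is lossy.)

```python
# Toy IR node and a peephole pass sketching the optimization:
# don't emit a convert-to-BF16 when its user converts back to F32.
class Node:
  def __init__(self, op, operand=None, dtype=None):
    self.op, self.operand, self.dtype = op, operand, dtype

def simplify_convert(node):
  """If `node` converts to F32 and its operand converts to BF16,
  bypass both converts and return the original F32 value."""
  if (node.op == "convert" and node.dtype == "f32"
      and node.operand is not None
      and node.operand.op == "convert"
      and node.operand.dtype == "bf16"):
    return node.operand.operand
  return node

x = Node("parameter", dtype="f32")
to_bf16 = Node("convert", operand=x, dtype="bf16")
back_to_f32 = Node("convert", operand=to_bf16, dtype="f32")
assert simplify_convert(back_to_f32) is x
```

Nodes that don't match the pattern pass through untouched, which keeps the rewrite safe to run over an entire graph.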
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235476472
-
Committed by Sanjoy Das
PiperOrigin-RevId: 235440605
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235438039
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235435925
-
Committed by Guangda Lai
PiperOrigin-RevId: 235418348
-
- 24 Feb 2019, 3 commits
-
-
Committed by Dan Moldovan
PiperOrigin-RevId: 235411715
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 235409829
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 235395188
-