- 10 Aug 2017, 27 commits
-
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164804532
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164804406
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164803218
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164802741
-
Committed by A. Unique TensorFlower
Update the Android Detect demo to use models exported using the TensorFlow Object Detection API. Resolves #6738. PiperOrigin-RevId: 164802542
-
Committed by William Chargin
This changes the `samples_per_second` parameter of the `encode_audio` and `decode_audio` ops from an `Attr` to an `Input`, so that it can be given arbitrary tensor values instead of only constants. This change is important for use cases that want to use a single graph to encode audio clips at arbitrary sample rates. (In particular, we want to create a Python function that uses a long-running TensorFlow session to encode audio; the sample rate cannot be known ahead of time, and we don't want to have to reconstruct the graph on every call.) PiperOrigin-RevId: 164799067
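The distinction above between an `Attr` and an `Input` is that an `Attr` is baked into the graph at construction time, while an `Input` is an ordinary tensor that can be fed at run time. A minimal pure-Python sketch of why this matters for a long-running session (this is an illustrative analogy, not the actual ffmpeg op implementation; all names here are hypothetical):

```python
# Conceptual sketch: an Attr-style parameter is a construction-time constant,
# so each distinct sample rate forces a fresh graph build; an Input-style
# parameter is a runtime argument, so one graph serves every rate.

graph_builds = 0

def build_encode_graph_with_attr(samples_per_second):
    """Attr-style: the rate is fixed when the graph is constructed."""
    global graph_builds
    graph_builds += 1
    return lambda audio: ("encoded", len(audio), samples_per_second)

def build_encode_graph_with_input():
    """Input-style: the rate is passed at call time, like a fed tensor."""
    global graph_builds
    graph_builds += 1
    return lambda audio, samples_per_second: ("encoded", len(audio), samples_per_second)

# Attr-style: three sample rates require three graph constructions.
for rate in (8000, 16000, 44100):
    build_encode_graph_with_attr(rate)([0.0] * 4)
attr_builds = graph_builds

# Input-style: a single graph handles all three rates.
graph_builds = 0
encode = build_encode_graph_with_input()
results = [encode([0.0] * 4, rate) for rate in (8000, 16000, 44100)]
input_builds = graph_builds

print(attr_builds, input_builds)  # 3 1
```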
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164797105
-
Committed by A. Unique TensorFlower
slightly different semantics. PiperOrigin-RevId: 164796436
-
Committed by Brennan Saeta
PiperOrigin-RevId: 164794573
-
Committed by Sukriti Ramesh
PiperOrigin-RevId: 164791375
-
Committed by Francois Chollet
Refactor Keras layers to rely on the core constraint implementation. PiperOrigin-RevId: 164788653
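The core constraint contract that layers rely on is small: a constraint is a callable that maps a weight tensor to a projected weight tensor, applied after each optimizer update. A NumPy sketch of a MaxNorm-style constraint for illustration (the real classes live in Keras/TensorFlow; this reimplementation is only an approximation of their behavior):

```python
import numpy as np

class MaxNorm:
    """Illustrative max-norm constraint: clip each column's L2 norm."""

    def __init__(self, max_value=2.0, axis=0):
        self.max_value = max_value
        self.axis = axis

    def __call__(self, w):
        # Compute per-column norms, clip them, and rescale the weights.
        norms = np.sqrt(np.sum(np.square(w), axis=self.axis, keepdims=True))
        desired = np.clip(norms, 0, self.max_value)
        return w * (desired / (1e-7 + norms))

constraint = MaxNorm(max_value=1.0)
w = np.array([[3.0, 0.1],
              [4.0, 0.1]])   # first column has norm 5.0, second ~0.14
w = constraint(w)
print(np.linalg.norm(w, axis=0))  # first column clipped to norm ~1.0
```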
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164787644
-
Committed by Yangzihao Wang
PiperOrigin-RevId: 164786167
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164782851
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164782742
-
Committed by Derek Murray
This transformation is a simpler (and potentially more efficient) replacement for `Dataset.map(lambda x: x, num_threads=1, output_buffer_size=N)`, avoiding the overhead of function invocation and simplifying the synchronization slightly. PiperOrigin-RevId: 164781954
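The transformation being replaced is essentially an output buffer that decouples producer and consumer while invoking an identity function per element. A minimal Python analogue of such a prefetch buffer, sketched with a background thread and a bounded queue (illustrative only; the real implementation is a C++ dataset op):

```python
import queue
import threading

def prefetch(iterable, buffer_size):
    """Yield elements of `iterable` while a background thread keeps a
    bounded buffer of up to `buffer_size` elements filled."""
    q = queue.Queue(maxsize=buffer_size)
    sentinel = object()  # marks end of the stream

    def producer():
        for item in iterable:
            q.put(item)          # blocks when the buffer is full
        q.put(sentinel)

    threading.Thread(target=producer, daemon=True).start()
    while True:
        item = q.get()
        if item is sentinel:
            return
        yield item

out = list(prefetch(range(10), buffer_size=4))
print(out)  # [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
```

Because no user-defined function runs per element, a dedicated op like this avoids the function-invocation overhead the commit mentions.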
-
Committed by Kay Zhu
[XLA] Fix Broadcast implementation in HloEvaluator to handle the special case of scalar broadcast to be consistent with other backends. Also add a test for scalar broadcast. PiperOrigin-RevId: 164781786
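The special case the fix addresses is broadcasting a rank-0 (scalar) operand to an arbitrary output shape. NumPy exhibits the semantics the other backends implement and the evaluator is being made consistent with: every element of the result equals the scalar.

```python
import numpy as np

scalar = np.array(3.0)                     # rank-0 operand
result = np.broadcast_to(scalar, (2, 3))   # broadcast to shape (2, 3)
print(result)                              # every element is 3.0
```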
-
Committed by Mark Heffernan
Updating is possible if operands/uses or computation roots change in the graph. Updating is not possible if instructions are deleted or if new instructions are added. Specific changes:
* Add verification methods for asserting invariants and checking the analysis after updating.
* Always add phi values at while instructions. Previously these were added only if the phi had different inputs. The advantage of using phis unconditionally is that the set of values is fixed for a module. Updates due to changing operands/uses in the graph do not create new values.
* Store values in a vector rather than a map. With unconditional phi values, the number of HloValues is fixed, so the values can be held in a vector with stable references to elements.
PiperOrigin-RevId: 164778750
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164777455
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164775849
-
Committed by Yao Zhang
PiperOrigin-RevId: 164771538
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164762982
-
Committed by Brennan Saeta
A common failure mode of the new datasets input pipeline is an extremely long first sess.run call. It can sometimes appear to users that things are simply hanging, when instead a large shuffle buffer is being filled. When filling large shuffle buffers, we should let users know what's going on. PiperOrigin-RevId: 164760903
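The reason the first call looks like a hang: a shuffle with buffer size N must pull N elements from upstream before it can emit its first element. A simplified Python sketch of the buffered-shuffle algorithm (illustrative only, not the actual dataset kernel):

```python
import random

def shuffled(iterable, buffer_size, seed=0):
    """Approximate buffered shuffle: fill a buffer of `buffer_size`
    elements, then emit a random buffer slot for each new upstream
    element, refilling the slot as we go."""
    rng = random.Random(seed)
    buffer = []
    it = iter(iterable)
    # Fill phase: nothing is produced until the buffer is full (or the
    # source is exhausted) -- this is the long initial sess.run the
    # commit adds progress messages for.
    for element in it:
        buffer.append(element)
        if len(buffer) >= buffer_size:
            break
    # Steady state: swap each incoming element into a random slot.
    for element in it:
        i = rng.randrange(len(buffer))
        yield buffer[i]
        buffer[i] = element
    # Drain what remains in the buffer.
    rng.shuffle(buffer)
    yield from buffer

out = list(shuffled(range(100), buffer_size=50))
print(sorted(out) == list(range(100)))  # True: output is a permutation
```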
-
Committed by A. Unique TensorFlower
Don't run contrib/timeseries/python/timeseries:state_management_test pip test until crash has been resolved. PiperOrigin-RevId: 164759761
-
Committed by Benoit Steiner
PiperOrigin-RevId: 164739939
-
Committed by David Soergel
PiperOrigin-RevId: 164739283
-
Committed by Eric Liu
Also make version name alpha instead of RC. PiperOrigin-RevId: 164735457
-
- 09 Aug 2017, 13 commits
-
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164728247
-
Committed by HyoukJoong Lee
removable from a computation. This is to prevent DCE from removing a while instruction that includes a send/recv instruction. PiperOrigin-RevId: 164722478
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 164718342
-
Committed by James Qin
PiperOrigin-RevId: 164686075
-
Committed by RJ Ryan
Prevents bad formatting: https://www.tensorflow.org/versions/r1.2/api_docs/python/tf/nn/dynamic_rnn PiperOrigin-RevId: 164675585
-
Committed by Benoit Steiner
properly on some platforms. PiperOrigin-RevId: 164665656
-
Committed by Yuefeng Zhou
PiperOrigin-RevId: 164660701
-
Committed by Alexandre Passos
PiperOrigin-RevId: 164659904
-
Committed by A. Unique TensorFlower
Every summary op writes data for a single plugin to process. Hence, each SummaryMetadata proto should have a single PluginData optional field (instead of a repeated one). This removes much complexity from TensorBoard logic that loops over the plugin data. It also simplifies the SQL schema - it can now enforce a one-to-one relationship between summary op and plugin. PiperOrigin-RevId: 164659570
-
Committed by Frank Chen
Make a change to the Cluster Resolver API: If no `credentials` are passed in to the GCE and TPU Cluster Resolvers, then we will use the GoogleCredentials.get_application_default() credentials. If users want to pass in no credentials at all, then they will have to pass in "None" explicitly. PiperOrigin-RevId: 164659129
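The API pattern described is the classic "default sentinel" idiom: a distinct default value lets the resolver distinguish "argument omitted" (fall back to application-default credentials) from an explicit `credentials=None` (use no credentials at all). A pure-Python sketch of the pattern (names and signature are illustrative, not the exact TensorFlow API):

```python
_USE_APPLICATION_DEFAULT = object()  # hypothetical sentinel object

def fake_application_default_credentials():
    # Stand-in for GoogleCredentials.get_application_default().
    return "application-default-credentials"

def make_cluster_resolver(credentials=_USE_APPLICATION_DEFAULT):
    """If `credentials` is omitted, fall back to the application default;
    an explicit None means 'no credentials'."""
    if credentials is _USE_APPLICATION_DEFAULT:
        credentials = fake_application_default_credentials()
    return {"credentials": credentials}

print(make_cluster_resolver())                  # uses application default
print(make_cluster_resolver(credentials=None))  # explicitly no credentials
```

A plain `credentials=None` default could not support this, since the two cases would be indistinguishable inside the function.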
-
Committed by Derek Murray
This change ensures that the mapper/predicate function used respectively in these transformations has its own ScopedStepContainer, thereby allowing the use of TensorArray resources (and operations that use them, such as control-flow ops) inside these functions. Fixes #11715. PiperOrigin-RevId: 164648309
-
Committed by A. Unique TensorFlower
Speed up tf.determinant by using LU factorization kernels from cuSolver for large matrices instead of the batched LU factorization from cuBlas, which is only suitable for small matrices. Speedup measured on Titan X (Maxwell):

    Shape              Before     After      Speedup
    ------------------------------------------------
    (4, 4)             0.000159   0.000200   -26.35% (noise)
    (16, 16)           0.000198   0.000190     3.59%
    (64, 64)           0.000592   0.000538     9.10%
    (128, 128)         0.001348   0.001376    -2.14%
    (200, 200)         0.003201   0.002882     9.94%
    (256, 256)         0.005096   0.003373    33.81%
    (1024, 1024)       0.169690   0.012452    92.66%
    (2, 512, 512)      0.023370   0.012243    47.61%
    (2, 1024, 1024)    0.178757   0.025198    85.90%
    (4, 4, 4)          0.000121   0.000128    -5.79%
    (4, 16, 16)        0.000212   0.000190     9.95%
    (4, 64, 64)        0.000499   0.000514    -3.01%
    (4, 128, 128)      0.001276   0.001214     4.79%
    (4, 256, 256)      0.004364   0.004314     1.14%
    (4, 512, 512)      0.025031   0.024956     0.30%
    (4, 1024, 1024)    0.184210   0.052858    71.31%
    (8, 512, 512)      0.026542   0.026502     0.15%
    (8, 1024, 1024)    0.186145   0.185988     0.08%
    (65, 4, 4)         0.000152   0.000142     6.05%
    (65, 16, 16)       0.000197   0.000194     1.52%
    (65, 64, 64)       0.000559   0.000549     1.79%
    (65, 128, 128)     0.001326   0.001308     1.29%
    (65, 256, 256)     0.005495   0.005525    -0.55%
    (65, 512, 512)     0.034147   0.034662    -1.51%
    (513, 4, 4)        0.000144   0.000195   -35.42% (noise)
    (513, 16, 16)      0.000207   0.000200     3.38%
    (513, 64, 64)      0.001502   0.001490     0.79%
    (513, 256, 256)    0.033428   0.032933     1.48%
    (513, 512, 512)    0.234707   0.216858     7.60%

PiperOrigin-RevId: 164633730
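The mathematical identity behind an LU-based determinant: after a pivoted factorization P·A = L·U with unit-diagonal L, det(A) = sign(P) · prod(diag(U)). A minimal NumPy sketch of the CPU-side algorithm, with partial pivoting (the real kernels run on GPU via cuSolver; this is only an illustration of the math):

```python
import numpy as np

def det_via_lu(a):
    """Determinant via in-place LU factorization with partial pivoting."""
    a = a.astype(float).copy()
    n = a.shape[0]
    sign = 1.0
    for k in range(n):
        # Partial pivoting: pick the largest remaining entry in column k.
        p = k + np.argmax(np.abs(a[k:, k]))
        if p != k:
            a[[k, p]] = a[[p, k]]   # row swap flips the sign
            sign = -sign
        if a[k, k] == 0.0:
            return 0.0              # singular matrix
        # Eliminate below the pivot, storing multipliers in place.
        a[k+1:, k] /= a[k, k]
        a[k+1:, k+1:] -= np.outer(a[k+1:, k], a[k, k+1:])
    # Determinant is the permutation sign times the product of U's diagonal.
    return sign * np.prod(np.diag(a))

rng = np.random.default_rng(0)
m = rng.standard_normal((5, 5))
print(np.allclose(det_via_lu(m), np.linalg.det(m)))  # True
```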
-
Committed by A. Unique TensorFlower
Speed up GPU version of tf.matrix_inverse by using LU factorization kernels from cuSolver and a hand-written matrix identity kernel, instead of the batched LU factorization from cuBlas, which is only suitable for small matrices. Speedup measured on Titan X (Maxwell):

    Shape            adjoint     Before     After      Speedup
    ----------------------------------------------------------
    (4, 4)           noadjoint   0.000204   0.000193    5.3%
    (16, 16)         noadjoint   0.000360   0.000186   48.3%
    (256, 256)       noadjoint   0.013830   0.003852   72.1%
    (1024, 1024)     noadjoint   0.647639   0.015075   97.6%
    (513, 4, 4)      noadjoint   0.000219   0.000192   12.3%
    (513, 16, 16)    noadjoint   0.000293   0.000195   33.4%
    (513, 256, 256)  noadjoint   0.120573   0.120175    0.3%
    (4, 4)           adjoint     0.000201   0.000193    3.9%
    (16, 16)         adjoint     0.000282   0.000185   34.4%
    (256, 256)       adjoint     0.013028   0.003391   73.9%
    (1024, 1024)     adjoint     0.647752   0.014341   97.7%
    (513, 4, 4)      adjoint     0.000221   0.000197   10.8%
    (513, 16, 16)    adjoint     0.000384   0.000205   46.6%
    (513, 256, 256)  adjoint     0.131402   0.130616    0.6%

PiperOrigin-RevId: 164623298
-