- 02 May 2020: 40 commits
-
Committed by Rishit Dagli
Added Coursera course Machine Learning with TensorFlow on GCP
-
Committed by A. Unique TensorFlower
Make sure custom losses + metrics get autographed, even though autograph marks code in core TensorFlow with `DoNotConvert`. If this changes and autograph starts converting Keras code, this won't be needed (e.g. maybe after the repo split). Fixes #37440. PiperOrigin-RevId: 309518788 Change-Id: I42ce3e208e5c2fd10c4de008d9087d089ee4c591
-
Committed by Tomer Kaftan
Make sure custom losses + metrics get autographed, even though autograph marks code in core TensorFlow with `DoNotConvert`. If this changes and autograph starts converting Keras code, this won't be needed (e.g. maybe after the repo split). Fixes #37440. PiperOrigin-RevId: 309516574 Change-Id: Ia82ee60fed225957dec0de4d378d6cac57d7f16e
-
Committed by Derek Murray
This change adds an `OpKernelContext::executor_type()` method, which (by analogy with `OpKernelContext::runner()` and `OpKernelContext::run_all_kernels_inline()`) enables function calls within a kernel to inherit that option from the calling context. As a concrete example, this enables a WhileOp or IfOp running with the SINGLE_THREADED_EXECUTOR executor_type to use the same optimizations in the branch/cond/body functions of those ops. PiperOrigin-RevId: 309515120 Change-Id: I11b0b3ee458dd8ea1cdc9284f8acdb293c5cb770
-
Committed by Derek Murray
The "DeleteIterator" op can block on work in the inter-op threadpool, e.g. waiting for function execution to terminate. As a result, we need to ensure that this blocking happens on a thread created specifically for the purpose (or on a donated calling thread), which converting the op to a `HybridAsyncOpKernel` achieves. PiperOrigin-RevId: 309514952 Change-Id: I24f056bfbfbf4b1df0332432cc43aa96be123d6a
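The pattern behind this commit is general: cleanup work that may block must not occupy a thread from the shared pool it could end up waiting on. A minimal standard-library Python sketch of the idea (the names `delete_iterator_async` and `cleanup_fn` are illustrative, not TensorFlow API):

```python
import threading

def delete_iterator_async(cleanup_fn, done_cb):
    """Run potentially blocking cleanup on a dedicated thread.

    Mirrors in spirit (not in API) the reason "DeleteIterator" was made
    async: blocking work must not tie up a shared inter-op pool thread,
    or it could deadlock waiting for functions that need that same pool.
    """
    def worker():
        cleanup_fn()   # may block, e.g. waiting for in-flight work
        done_cb()      # signal completion once cleanup finishes
    t = threading.Thread(target=worker, name="iterator-deleter")
    t.start()
    return t

results = []
t = delete_iterator_async(lambda: results.append("cleaned"),
                          lambda: results.append("done"))
t.join()
print(results)  # ['cleaned', 'done']
```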
-
Committed by Derek Murray
This fixes the tensorflow_serving OSS build, which otherwise fails with the error:
```
ERROR: /tmpfs/tmp/bazel/external/org_tensorflow/tensorflow/python/keras/api/BUILD:121:1: Couldn't build file external/org_tensorflow/tensorflow/python/keras/api/_v1/__init__.py: Executing genrule @org_tensorflow//tensorflow/python/keras/api:keras_python_api_gen_compat_v1 failed (Exit 1)
ImportError: /tmpfs/tmp/bazel/execroot/tf_serving/bazel-out/host/bin/external/org_tensorflow/tensorflow/python/keras/api/create_tensorflow.python_api_keras_python_api_gen_compat_v1.runfiles/org_tensorflow/tensorflow/python/_pywrap_quantize_training.so: undefined symbol: _ZN10tensorflow38DoQuantizeTrainingOnSerializedGraphDefERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEiS7_PS5_
```
PiperOrigin-RevId: 309514904 Change-Id: Ib4834d8e03f9193c7dafe096c697d9e5ef45f875
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309513623 Change-Id: If25c55f4700cb13906b001e1fb03d801fb84bf29
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309513590 Change-Id: I63c86395a92176fa7165b4ae9909b50fbe319f39
-
Committed by Dan Moldovan
Create interfaces for distributed iterators and iterables. These interfaces are private for now. Use them to break the circular dependency with AutoGraph. PiperOrigin-RevId: 309510977 Change-Id: Id2e3190e98be6dfa1adf804411d90812a96c30c4
-
Committed by Dan Moldovan
PiperOrigin-RevId: 309510456 Change-Id: Ifc3683fd575072f6af2250667f6875c9c7399f6c
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309506396 Change-Id: I1652fd13a4d398236058c1806ff900576d39172b
-
Committed by Jiho Choi
PiperOrigin-RevId: 309505494 Change-Id: I939fb769f8338d99402592858e4d7b8e7a1aa56c
-
Committed by Rachel Lim
PiperOrigin-RevId: 309503752 Change-Id: Id1e4c5f24a90ec82068e8a5ed9579b5b1c84403d
-
Committed by Jiri Simsa
PiperOrigin-RevId: 309501684 Change-Id: I14a7482a4bf18530d0d60a468cf3b81b7a7e7593
-
Committed by A. Unique TensorFlower
Correctness > speed. And using log1p improves accuracy significantly, especially of gradients. *** Original change description *** Partial rollback of the change to the softplus functor. The large speedup observed was only true for arguments greater than ~5; for arguments less than ~5, a significant slowdown would occur. PiperOrigin-RevId: 309499644 Change-Id: Ie137dd5e2fa798f8c1e0330375e3e0b6dbcc31a6
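The numerical point behind this rollback can be seen with plain floats: for sufficiently negative arguments, the naive softplus `log(1 + exp(x))` rounds `1 + exp(x)` to exactly 1.0 and returns 0, while `log1p(exp(x))` keeps the tiny term. A minimal sketch (standard-library Python, not the TensorFlow kernel):

```python
import math

def softplus_naive(x):
    # log(1 + exp(x)): loses all precision when exp(x) is far below
    # machine epsilon, because 1 + exp(x) rounds to exactly 1.0.
    return math.log(1.0 + math.exp(x))

def softplus_log1p(x):
    # log1p(exp(x)) evaluates log(1 + y) accurately for tiny y.
    return math.log1p(math.exp(x))

x = -40.0
# exp(-40) ~ 4.25e-18, well below double epsilon (~2.2e-16)
print(softplus_naive(x))   # 0.0 (all information lost)
print(softplus_log1p(x))   # ~4.25e-18 (correct)
```

The same cancellation hits the gradient path, which is why the commit calls out accuracy "especially of gradients".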
-
Committed by Allen Lavoie
Fixes a test whose randomness was affected by this. PiperOrigin-RevId: 309495955 Change-Id: I4e184c9ca40b0102cf5eb895a040f9f2a09e753a
-
Committed by Mihai Maruseac
PiperOrigin-RevId: 309494739 Change-Id: Icae7c227f0515cefce403ab7f99176ad0b8eb4b2
-
Committed by A. Unique TensorFlower
Correctness > speed. And using log1p improves accuracy significantly, especially of gradients. *** Original change description *** Partial rollback of the change to the softplus functor. The large speedup observed was only true for arguments greater than ~5; for arguments less than ~5, a significant slowdown would occur. PiperOrigin-RevId: 309487961 Change-Id: Ibe949c578a002d25250ba519acd16597981d1a4e
-
Committed by Bruce Fontaine
PiperOrigin-RevId: 309485102 Change-Id: I1234400cf973644d3c6aa6e09568aea3da3f3f63
-
Committed by Bixia Zheng
mode. When forming segments for implicit batch mode, the inputs of the operations in a segment need to have a rank of at least 2, and the implicit batch size of the operations needs to be consistent. Previously, we did not perform such checking during segmentation. We might fail to build a TRTEngineOp for a segment that doesn't meet these requirements, and may even crash when executing a TRTEngineOp built for such a segment, as shown in the bug here. Add test cases. PiperOrigin-RevId: 309484466 Change-Id: I715817e9359d41075300911ca44aacb410e63ea0
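The two checks described here (rank at least 2, consistent implicit batch size across a segment) can be sketched as a small validation routine. This is a hypothetical illustration of the rule, not the real TF-TRT segmenter; `Op` and `can_join_segment` are invented names:

```python
from dataclasses import dataclass

@dataclass
class Op:
    name: str
    input_shapes: list  # list of shape tuples; dim 0 is the implicit batch

def can_join_segment(op, segment_batch_size):
    """Return (ok, batch_size): whether op may join the segment.

    Implicit batch mode requires every input to have rank >= 2 and all
    ops in one segment to agree on the implicit batch dimension.
    """
    for shape in op.input_shapes:
        if len(shape) < 2:                    # rank check
            return False, segment_batch_size
        if segment_batch_size is None:
            segment_batch_size = shape[0]     # first op fixes the batch size
        elif shape[0] != segment_batch_size:  # consistency check
            return False, segment_batch_size
    return True, segment_batch_size

ok, bs = can_join_segment(Op("conv", [(8, 3, 32, 32)]), None)
bad, _ = can_join_segment(Op("mismatch", [(4, 10)]), bs)
print(ok, bs, bad)  # True 8 False
```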
-
Committed by A. Unique TensorFlower
https://github.com/llvm/llvm-project/commit/9295f356bb30 PiperOrigin-RevId: 309482235 Change-Id: I54f4d159eaa88bed00cbf410037f025b1f2eb5ca
-
Committed by Derek Murray
Previously, `RunSync()` was missing the optimization from e5c6881c that enables a single-component multi-device function to be executed as a local function. When 624f1a0f switched all tf.data functions to be treated as multi-device functions, the optimization became load-bearing, because occasionally it would dispatch a function to a ProcessFunctionLibraryRuntime that does not actually support multi-device functions, because it has no rendezvous factory. The fix is simple: update the call to `GetHandleOnDevice()` in `PrepareRunSync()` to pass the optional `include_multi_device = true` argument. PiperOrigin-RevId: 309481098 Change-Id: Id12d01cffc50399bab5711c7de4756c542873ec2
-
Committed by Karim Nosir
1) Reorder 2 successive adds with constants to allow constant folding. 2) Remove trivial Add/Sub/Mul/Div. PiperOrigin-RevId: 309480295 Change-Id: Ie84f5fd4da79dc80a3a5ee806051bd63b5b49888
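Both rewrites above can be shown on a toy expression tree. This is an illustration of the optimization, not the actual MLIR pass; the tuple encoding and `fold_adds` name are invented:

```python
def fold_adds(expr):
    """Simplify nested adds: expr is a variable name, a number,
    or a tuple ("add", lhs, rhs)."""
    if isinstance(expr, tuple) and expr[0] == "add":
        inner, c2 = fold_adds(expr[1]), fold_adds(expr[2])
        if isinstance(c2, (int, float)):
            if c2 == 0:
                # rewrite (2): x + 0 is trivial, drop the add entirely
                return inner
            if (isinstance(inner, tuple) and inner[0] == "add"
                    and isinstance(inner[2], (int, float))):
                # rewrite (1): (x + c1) + c2 -> x + (c1 + c2), so the two
                # constants fold into one at rewrite time
                return ("add", inner[1], inner[2] + c2)
        return ("add", inner, c2)
    return expr

print(fold_adds(("add", ("add", "x", 2), 3)))  # ('add', 'x', 5)
print(fold_adds(("add", "y", 0)))              # 'y'
```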
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309478605 Change-Id: Ic2b716cf6d4b418f0c05e9224e3f974fea9cc8d7
-
Committed by Bruce Fontaine
PiperOrigin-RevId: 309476584 Change-Id: I250ab2972a94cd9ce7527e2fb3de215d93b15c85
-
Committed by HyoukJoong Lee
PiperOrigin-RevId: 309476085 Change-Id: Ice1d773353cdd58b0cfeb004f01c394677d6d3a8
-
Committed by A. Unique TensorFlower
Correctness > speed. And using log1p improves accuracy significantly, especially of gradients. *** Original change description *** Partial rollback of the change to the softplus functor. The large speedup observed was only true for arguments greater than ~5; for arguments less than ~5, a significant slowdown would occur. PiperOrigin-RevId: 309472556 Change-Id: I015752231ebe7eb961c624fdc3d3a8e2f57101c6
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 309472392 Change-Id: Ic8a8702f699e2138e5d7a4145eba6c15decf645f
-
Committed by Peter Hawkins
[StreamExecutor] Allow HostExecutor users to control the stack sizes of threads used for HostStream via. Also include non_portable_tags in the keys used when creating an Executor. There seems to be no good reason that it is omitted. Will fix https://github.com/google/jax/issues/432 when included in a jaxlib release. PiperOrigin-RevId: 309472318 Change-Id: Ia2535616047390d6bf6f2da82a666a321dcc9f5d
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309471787 Change-Id: If49367d3f6a56bd5fa361d8c23627e631c9feb8f
-
Committed by Francois Chollet
PiperOrigin-RevId: 309471131 Change-Id: I1085115abff5630b1ce94ef951a75d4f82bdc862
-
Committed by Raman Sarokin
PiperOrigin-RevId: 309469850 Change-Id: I7054a3238dfbd9d94efb506e11d7ee91d811edd4
-
Committed by Advait Jain
PiperOrigin-RevId: 309462718 Change-Id: I11e53b12b0b7ee7d6725a547c8016123d7f6682b
-
Committed by Anjali Sridhar
PiperOrigin-RevId: 309458102 Change-Id: I6b406a6b17726e409ab4798f40dd5f0791c696db
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309457859 Change-Id: I44bbc5804bb51b8248a52e9b2a0e863641905d17
-
Committed by Shanqing Cai
This is related to https://github.com/tensorflow/tensorboard/pull/3564
1. Add DebuggedGraph.get_op_creation_digest().
2. Remove DebuggedGraph.get_op_type(), which is superseded by DebuggedGraph.get_op_creation_digest() and is not used anywhere.
3. Add DebuggedGraph.add_op_consumers() and DebuggedGraph.get_op_consumers() to enable efficient tracking of the downstream consuming ops of a graph op.
4. Add host_name and stack_frame_ids to the data class GraphOpCreationDigest.
PiperOrigin-RevId: 309455936 Change-Id: I104084c1ef8b887f69733702a2f4c3190fa5402f
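The consumer-tracking idea in item 3 amounts to maintaining a reverse index from each op to the ops that consume its outputs, so lookups avoid scanning the whole graph. A minimal sketch, with method names modeled on those mentioned in the commit but otherwise hypothetical (the real class lives in TensorFlow's debug-events reader):

```python
from collections import defaultdict

class DebuggedGraphSketch:
    """Toy reverse index: op name -> list of (src_slot, dst_op, dst_slot)."""

    def __init__(self):
        self._op_consumers = defaultdict(list)

    def add_op_consumer(self, src_op, src_slot, dst_op, dst_slot):
        # Record that dst_op's input dst_slot reads output src_slot of src_op.
        self._op_consumers[src_op].append((src_slot, dst_op, dst_slot))

    def get_op_consumers(self, src_op):
        # O(1) dict lookup instead of a full-graph scan.
        return self._op_consumers[src_op]

g = DebuggedGraphSketch()
g.add_op_consumer("MatMul_1", 0, "Relu_1", 0)
g.add_op_consumer("MatMul_1", 0, "Identity_2", 0)
print(g.get_op_consumers("MatMul_1"))
# [(0, 'Relu_1', 0), (0, 'Identity_2', 0)]
```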
-
Committed by A. Unique TensorFlower
This CL introduces a TfLiteHexagonDelegateOptionsDefault() method that returns an instance of TfLiteHexagonDelegateOptions with default values. Without an API for defaults, every new field added would break consumers of the Hexagon delegate, as the new fields would be filled with garbage values. This is also similar to the TfLite GPU delegate API. PiperOrigin-RevId: 309453808 Change-Id: I12275991b34f175e2d1abbe40068287238f44684
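The "options factory" pattern described here, shown in Python for illustration (the real API is a C struct plus a defaults function; the field names below are stand-ins, not the delegate's actual options): callers always start from the defaults function, so when a new field is added, existing callers pick up its default instead of reading uninitialized memory.

```python
from dataclasses import dataclass

@dataclass
class DelegateOptionsSketch:
    # Illustrative fields only; the real options struct differs.
    debug_level: int = 0
    powersave_level: int = 0
    print_graph_profile: bool = False

def delegate_options_default():
    # Analogue of TfLiteHexagonDelegateOptionsDefault(): one function
    # owns the defaults for every field, old and newly added alike.
    return DelegateOptionsSketch()

opts = delegate_options_default()
opts.debug_level = 1  # override only the fields you care about
print(opts)
```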
-
Committed by Haoyu Zhang
PiperOrigin-RevId: 309452719 Change-Id: I1ac91638464b2a3a1914cee5cadb0353b307e6df
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 309451157 Change-Id: Ie70aa64a31a832bd65e75d8d7a46895ee99cf00c
-
Committed by Allen Lavoie
Previously the context was deleting some objects before custom devices had a chance to release their references. This change just moves custom device destruction to be a bit earlier. PiperOrigin-RevId: 309450790 Change-Id: Id95d7918c5cbe87f6948ef302d6373f55bbb13ab
-