- 22 Feb 2021 (10 commits)
-
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358764977 Change-Id: I0723873de31eb5637e1cb6f028e00f6dc8ffa6cd
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358764974 Change-Id: I6262768f6179528378491ca775626964df7affbd
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358764677 Change-Id: I7026ac8570c0c62c8c62d2f8ab4afd8b236ea3da
-
Committed by Renjie Liu
PiperOrigin-RevId: 358744654 Change-Id: I932d8ab7bc973d03dfa2854f75842abd5e321685
-
Committed by Scott Zhu
PiperOrigin-RevId: 358744545 Change-Id: I6586d4986ed5777b2f7130d757c9638996bef789
-
Committed by Renjie Liu
PiperOrigin-RevId: 358736975 Change-Id: If576c43feedd160d969d7a6881890501a63cc110
-
Committed by A. Unique TensorFlower
We approximate it with expm1(x) = tanh(x/2) * (exp(x) + 1). Additional care is taken to handle the case when x/2 underflows but x does not, by simply approximating the result with x itself. This suffices to get us within a relative error of 5e-7, or about nine ULPs, when compared against libm. PiperOrigin-RevId: 358736039 Change-Id: I53a49e929edab1e3ba671b62b3d27495888c5011
-
Committed by David Majnemer
We approximate it with expm1(x) = tanh(x/2) * (exp(x) + 1). Additional care is taken to handle the case when x/2 underflows but x does not, by simply approximating the result with x itself. This suffices to get us within a relative error of 5e-7, or about nine ULPs, when compared against libm. PiperOrigin-RevId: 358734815 Change-Id: Iac7e397efc9a547299c7c9b7f932fcbb622ea842
-
Committed by Karim Nosir
PiperOrigin-RevId: 358711627 Change-Id: I5606b935b2f2e4e063c2b57e148772b2425488a8
-
Committed by Scott Zhu
PiperOrigin-RevId: 358705824 Change-Id: Id5ba77394d99fe33521ba65e8d8d571a6ef0fb16
-
- 21 Feb 2021 (21 commits)
-
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358651976 Change-Id: Idaea72dffa4c0f6b41c981d3654822799f0b5c0c
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358651970 Change-Id: I60dd1ecef73fb9eabf6adb5d715fb2e5b414eea5
-
Committed by Yuefeng Zhou
PiperOrigin-RevId: 358651724 Change-Id: Ia66fd8780e874fe7da96a153b763deb135886e95
-
Committed by Bixia Zheng
Remove a NodeDef parameter from a few routines. PiperOrigin-RevId: 358637036 Change-Id: Id7c911684e0866737b964feebeffbe30e064e671
-
Committed by Renjie Liu
PiperOrigin-RevId: 358626694 Change-Id: I81da84cbed8a579a978452c973e5f0427764e7c1
-
Committed by Renjie Liu
PiperOrigin-RevId: 358625154 Change-Id: Idd05505190d538d8e101a8209567c777c10a4434
-
Committed by Renjie Liu
PiperOrigin-RevId: 358625098 Change-Id: I16599700f7e3927948b1c90c88426cec915dfc89
-
Committed by Renjie Liu
PiperOrigin-RevId: 358624802 Change-Id: Ibb83fe391adf109aaffd2e012ad11f75dab6299c
-
Committed by Renjie Liu
PiperOrigin-RevId: 358623335 Change-Id: Iebef1615799ca81177a944e8129ce8e07094fd60
-
Committed by Renjie Liu
This is blocking the unidirectional_sequence_lstm shape inference. PiperOrigin-RevId: 358623334 Change-Id: Idc0bac03ce61f269908abf91d3437f334b6f47cd
-
Committed by Jacques Pienaar
It is expensive to grab the file:line:col location when just parsing a type. PiperOrigin-RevId: 358611715 Change-Id: I5af18a0d462876d02731764cc1c356005188dd44
-
Committed by Jacques Pienaar
This results in a blow-up due to uniquing tuple types for intermediate values. PiperOrigin-RevId: 358611399 Change-Id: I4e24a7c3d7afc1d5ad3fc9cd7dc432ba58b6ecf1
-
Committed by Rick Chao
PiperOrigin-RevId: 358610480 Change-Id: Idbd564c664e8a3d7e62404e1aafe9bc85994129b
-
Committed by Rick Chao
PiperOrigin-RevId: 358610102 Change-Id: I0c86858cf82bf9cd81c989309c308507d9016ef5
-
Committed by Jian Li
Relax the symmetric quantization constraint on int16 input for Quantize. There are other valid use cases that have a non-zero zero point. PiperOrigin-RevId: 358605601 Change-Id: I3102d1419cfc237fd465df264dd336e8041f2aad
-
Committed by Renjie Liu
PiperOrigin-RevId: 358603186 Change-Id: I53afbd10a16a5d22784d4fb9c128213494881bd6
-
Committed by Renjie Liu
PiperOrigin-RevId: 358602910 Change-Id: I2355fa696cd674d3119e5006a32e78b8a5666e4e
-
Committed by Renjie Liu
PiperOrigin-RevId: 358602878 Change-Id: I814b578c07dc247e8a252fc66cefca9fbcfa3cb1
-
Committed by A. Unique TensorFlower
This makes it possible to do an int8 x int8 -> int32 operation. PiperOrigin-RevId: 358596073 Change-Id: I9fb8108c44eb6eff2f27632b7f86813707a0e86d
-
Committed by David Majnemer
This makes it possible to do an int8 x int8 -> int32 operation. PiperOrigin-RevId: 358595582 Change-Id: Ic15ffe41202e8d15eac5222585c516552ccc84ef
-
Committed by Jay Shi
[tf.data] Add varying sleep times in the benchmark to better simulate the actual behavior of input pipelines. PiperOrigin-RevId: 358594865 Change-Id: Ia386e46a56dba1b812a66f6bc5576f90dc082817
-
- 20 Feb 2021 (9 commits)
-
-
Committed by Prakalp Srivastava
The `mhlo.dot_general` and `mhlo.convolution` result element type may differ from the operand element type; see the `preferred_element_type` attribute, which allows an i8 x i8 -> i32 dot computation. The `mhlo`-to-HLO exporter should pass the result element type to the XLA builder to override XLA's shape inference. PiperOrigin-RevId: 358580718 Change-Id: If3ad34b6824a52498663f0a1a031a5bdc29a24ee
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358555614 Change-Id: I601a64ef0cdb53564397e9644f529a92132497c2
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 358555610 Change-Id: I1ef2f9fc4612d5596a5f3942817298447d9c0258
-
Committed by Haoliang Zhang
Use scoped diagnostic handler to capture diagnostics within the `lower_static_tensor_list` pass when `allow_tensorlist_pass_through=true`. PiperOrigin-RevId: 358555572 Change-Id: I3545278631d992e4eb8640d7cbde3d7736c012a3
-
Committed by Jaesung Chung
* Sharing TensorFlow resource and variant tensors across multiple subgraphs

The existing Flex delegate handles multiple partitions within a subgraph well, but sharing tensors across subgraphs needs careful implementation: resource and variant tensors are RAII objects, so memcpy does not work on them. Value-based tensor formats (numbers and strings) can easily be laid out as memory data and copied between subgraphs by copying that data; for resource and variant tensors, the tensor pointer must be shared instead.

* Sharing a TensorFlow tensor pointer across multiple subgraphs

To do that, the Flex resource and variant buffer format is just a container storing the TensorFlow tensor object, which refers to the TensorFlow resource or variant tensor, as in the snippet below.

    struct OpaqueBuffer {
      // Store a TensorFlow tensor pointer. The life cycle of the pointer is
      // managed by reference counting in the TensorFlow world, and the pointer
      // is freed when all the buffer maps that own it are gone.
      const tensorflow::Tensor* tf_tensor;
    };

* Life cycle of the shared TensorFlow tensor object

When the TFLite runtime needs these tensors transferred to other subgraphs, it invokes the delegate interface method CopyFromBufferHandle. Flex's CopyFromBufferHandle creates the OpaqueBuffer structure above and stores the corresponding TensorFlow tensor pointer. The other subgraphs then find the TensorFlow pointer inside and insert it into their own buffer maps to track its life cycle within each subgraph. Even though the same TensorFlow tensor object may appear in multiple buffer maps in the Flex delegate, the tensors are freed when the Flex delegate is finalized. PiperOrigin-RevId: 358553465 Change-Id: I7cb056bf8f216851c771d7b2f26e69821a7cbf6a
-
Committed by A. Unique TensorFlower
Imported from GitHub PR https://github.com/tensorflow/tensorflow/pull/46551 This PR adds cudaMallocAsync as an option when CUDA 11.2 is used. PiperOrigin-RevId: 358553125 Change-Id: Id7110f54838fafb4107f06ed1d68ce0245010a3a
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 358545703 Change-Id: I46aa9403023db7e1024350593d48d18bd30a2a2e
-
Committed by Frank Chen
PiperOrigin-RevId: 358540864 Change-Id: Ie5055e71a7cceffe3534be769cd8d5e23f18cae2
-
Committed by Tomer Kaftan
Reduce the number of private symbols Keras relies on by switching specific usages of Trackable data structures to the generic `wrap_or_unwrap` method, which decides automatically according to the input type. PiperOrigin-RevId: 358532293 Change-Id: Id85a1082f1a9bfc9b6e025c57fb078e960e0db8b
-