- 01 Nov 2020, 2 commits
-
-
Committed by Jiri Simsa
PiperOrigin-RevId: 340071266 Change-Id: Ic21209a25a1f8efa1122c9cee4a8ab3b8043c308
-
Committed by Meghna Natraj
PiperOrigin-RevId: 340033160 Change-Id: Ia7007ddb1bcd128c684712eafa284af79fedac05
-
- 31 Oct 2020, 38 commits
-
-
Committed by Xinyi Wang
PiperOrigin-RevId: 340026157 Change-Id: Ibf94e17f0cd2cf88738d1d08be06cbe68a848ecf
-
Committed by Rohan Jain
tf.parallel_stack is implemented as a graph rewrite and therefore is not supported in eager mode. Documented this on parallel_stack, and it now raises an error to reflect that. PiperOrigin-RevId: 340016749 Change-Id: I7c695dfb98549bbc56105592b1b1c6b1ea06b231
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 340005180 Change-Id: Id2be30ca6dc141ef6c51113e0ef915d03568ba3b
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 340005178 Change-Id: Ic63905f93733940c57d64156c502b95f067bba4f
-
Committed by Yuanzhong Xu
PiperOrigin-RevId: 339989040 Change-Id: I46eaffb4e4b28c168b5f0182c4c0bb946d6eefe8
-
Committed by Scott Zhu
PiperOrigin-RevId: 339983364 Change-Id: I877f2394f13b899ace0bb2893e6cb5f073b03458
-
Committed by Terry Heo
Created a CMakeLists.txt under lite/examples/minimal. Usage:
  $ git clone https://github.com/tensorflow/tensorflow.git tensorflow_src
  $ mkdir minimal_build
  $ cd minimal_build
  $ cmake ../tensorflow_src/tensorflow/lite/examples/minimal
  $ cmake --build . -j
PiperOrigin-RevId: 339977906 Change-Id: I85cada5430407e72354d0a31a17584c48407db2f
-
Committed by Berkin Ilbeyi
When processing buffers that are placed in tuples, we create (possibly redundant) additional tuple and get-tuple-element (gte) instructions, which we later simplify. We don't need to create these additional instructions for buffers that were unchanged after memory space assignment. Without this change we were hitting compile-time and OOM issues. PiperOrigin-RevId: 339972236 Change-Id: Ieb82767ed52d39260141f3e3179387fbf5783920
-
Committed by Raman Sarokin
ConvConstants extended for better mapping onto thin dst tensors. PiperOrigin-RevId: 339971199 Change-Id: Ia14be12361ec81c70c0522baa9516c8cb28ba7ff
-
Committed by Raman Sarokin
ConvolutionTransposed3x3 generation changed to use storage type properties instead of specific storage types. PiperOrigin-RevId: 339967056 Change-Id: I5f6e192e21cbabf33bd152f91f4ab664b139fd14
-
Committed by Rick Chao
Multi-worker tutorial: add the MWMS+CTL example workflow (to be added to the tutorial) to multi_worker_tutorial_test. Fix the flakiness of the test and re-enable it in TAP. PiperOrigin-RevId: 339966024 Change-Id: Icb866f8a7054fa88f2e474c02960982a57c542b3
-
Committed by Allen Lavoie
nest.flatten was mangling some attribute lists. Fixes #40895. PiperOrigin-RevId: 339965951 Change-Id: Ibdbbb5a12b2d06a077b62902764ca4849a2b8d2a
-
Committed by Raman Sarokin
DepthwiseConvolution generation changed to use storage type properties instead of specific storage types. PiperOrigin-RevId: 339964553 Change-Id: I2cded9c306a40b136002c08e610daca2d75e1758
-
Committed by Raman Sarokin
Added the A14 GPU to the enum. PiperOrigin-RevId: 339964411 Change-Id: I13439c83c6f3d0f75a166b950a5f088843dd2023
-
Committed by Meghna Natraj
PiperOrigin-RevId: 339959753 Change-Id: I492d1f750e7a6fd97d4fdccbb838492e699c56b6
-
Committed by Anna R
PiperOrigin-RevId: 339956677 Change-Id: Id4cea5f63b3f257170c121d74e9cb2995a10b629
-
Committed by Meghna Natraj
PiperOrigin-RevId: 339954904 Change-Id: Id9f6717da5c32bff185a10a37e9682be64cc6501
-
Committed by Nat Jeffries
PiperOrigin-RevId: 339953940 Change-Id: I568589fc09fa6e5aae137e7e8533e1dab86190d3
-
Committed by Meghna Natraj
PiperOrigin-RevId: 339951415 Change-Id: Ic3bbb2e474a4fb287fb48f24509beb44625ee2e4
-
Committed by Anna R
PiperOrigin-RevId: 339951343 Change-Id: I02b02143721f7c656af8c1d19176dc44f3bffa20
-
Committed by Scott Zhu
PiperOrigin-RevId: 339948109 Change-Id: Ia2612efa66b48ac5784d2a53cdeb0ce9001f4025
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 339946530 Change-Id: I9ff4d4a380b6f2c2181edfc76cbb41af09abf0e2
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 339944681 Change-Id: Ifd0f43a2616328fe54819fae18ee626a44556e99
-
Committed by Ruoxin Sang
The dataset cardinality op should be placed on the same host device as the dataset; otherwise TF will try to encode/decode the dataset to another host device, which is not possible. PiperOrigin-RevId: 339942622 Change-Id: Ifbeb0aabdfc35f6c0e1248d69a8d22727e19c4a2
-
Committed by Yunxing Dai
This further simplifies the ShapedBuffer object, as it has no logic of its own that uses the platform field. Note that this CL also removed a sanity check in the allocation tracker; we can add that check back if needed by keeping track of `platform` inside the allocation tracker as a side map. PiperOrigin-RevId: 339938197 Change-Id: I090e603927ed3fccdb51254f972b3af2e1ec1470
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 339937575 Change-Id: I1d80b244adcb27a429bc09c63434847107d23bee
-
Committed by Rick Chao
PiperOrigin-RevId: 339929158 Change-Id: I8592aa6e2cec32a2ba97743a6f022f263f0f65e2
-
Committed by Hongmin Fan
Fix a bug that caused the TFRT batch fallback kernel to crash under high QPS load. When splitting a task of the derived class FallbackBatchTask (used only in TFRT), the bug created an object of the base class BatchTask instead, and put it into a batch with other FallbackBatchTask objects. When this batch of mixed task types is processed, it crashes. PiperOrigin-RevId: 339927837 Change-Id: Ie52bd11c61c9ddbe6ab803cd90208419d4b2dba6
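A minimal, hypothetical sketch of the class of bug described above (names simplified, not the actual TFRT code): when splitting a batchable task, the split must construct the same derived type as the original, otherwise batches of mixed task types are produced.

```python
class BatchTask:
    """Hypothetical base class for a unit of batched work."""
    def split(self, n):
        # Correct pattern: construct type(self), so a derived task splits
        # into tasks of the same derived type. The bug described above was
        # effectively constructing BatchTask() here regardless of the
        # actual type, yielding batches of mixed task types that crash
        # when processed.
        return [type(self)() for _ in range(n)]

class FallbackBatchTask(BatchTask):
    """Hypothetical derived task type used only on the TFRT fallback path."""

parts = FallbackBatchTask().split(3)
```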
-
Committed by Anna R
PiperOrigin-RevId: 339922525 Change-Id: I88c0d616154484ddd353d262f2fe7408cfc058c4
-
Committed by Katherine Wu
#KERAS_PRIVATE_API_CLEANUP PiperOrigin-RevId: 339917969 Change-Id: Iadd13a8d23a941528e0384e090dc07ddc9d63da6
-
Committed by Thomas O'Malley
With this change, it is now possible to mix and match tf.keras.Layers and tf.Modules inside a tf.keras.Model, and everything will be tracked properly.
- Variables in tf.Modules that are set as attributes of custom Layers and Models now show up properly in properties such as Layer.trainable_variables and Model.trainable_variables.
- tf.Modules do not show up in Model.layers. Instead, a new method Layer._flatten_modules is added that iterates over tf.Modules and Layers in the order that Keras expects. The existing method Layer.submodules (inherited from tf.Module) can still be used to iterate over tf.Modules and Layers with the tf.Module ordering. Layer._flatten_layers is built on top of Layer._flatten_modules.
- Layer._layers is renamed to Layer._self_tracked_trackables to avoid naming conflicts with user-defined attributes (and to reflect that this attr contains Layers, Modules, and TrackableDataStructures).
- A new property, tf.Module.non_trainable_variables, is added to tf.Module to enable this.
PiperOrigin-RevId: 339917644 Change-Id: I96a7302745280a6261de8c4295c5cbf5f4d7dd5c
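The tracking behavior above can be sketched in a few lines (hypothetical simplified names, not Keras internals): a deterministic depth-first walk over tracked sub-objects, so that variables owned by nested modules are all discovered from the root.

```python
class Module:
    """Hypothetical stand-in for a tracked module/layer."""
    def __init__(self):
        self.variables = []
        self.children = []   # tracked sub-modules, in insertion order

    def flatten_modules(self):
        # Depth-first, insertion-ordered traversal, analogous in spirit
        # to the Layer._flatten_modules method the commit describes.
        yield self
        for child in self.children:
            yield from child.flatten_modules()

    def all_variables(self):
        # Collect variables from every tracked module, nested or not.
        return [v for m in self.flatten_modules() for v in m.variables]

root = Module()
root.variables = ["kernel"]
inner = Module()
inner.variables = ["w", "b"]
root.children.append(inner)
```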
-
Committed by Jiho Choi
PiperOrigin-RevId: 339915378 Change-Id: I5675fdf910a3d8fd550d28a5382f0da20e450281
-
Committed by Raman Sarokin
PiperOrigin-RevId: 339913277 Change-Id: If4a92b3fafe922b5abf00bc8dc04deef1ef6ca6d
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 339911119 Change-Id: Iddf765ff3e58ea67c0cdf3b1ffe3065a92e8dcfc
-
Committed by Advait Jain
This change avoids an implicit conversion from the BuiltinOperator enum to int8_t in calls to CreateOperatorCodeDirect. This ensures that once the BuiltinOperator enum grows larger than a byte, the TFLM code does not get tripped up. PiperOrigin-RevId: 339910848 Change-Id: Ief978f2bb9ea818f8211ade3927baa35dc7a8512
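A small illustration of the narrowing hazard behind this change (an assumption-laden sketch, not the TFLM code): an enum value above 127 cannot round-trip through a signed 8-bit integer, which is why an implicit enum-to-int8_t conversion becomes unsafe once the enum outgrows a byte.

```python
import struct

def narrows_safely(value):
    """Check whether `value` survives narrowing to a signed int8."""
    low_byte = struct.pack("<i", value)[:1]        # keep only the low byte
    return struct.unpack("<b", low_byte)[0] == value

# Values that fit in a signed byte round-trip; larger enum values do not.
ok = narrows_safely(127)
bad = narrows_safely(130)   # 130 reads back as -126 as a signed byte
```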
-
Committed by Katherine Wu
Add a property that allows layers to specify that the input_spec can also be used as the input_signature of the layer's call function. By default, all layers exported by Keras will have this property return True, since Keras defines the input_spec shape more rigidly (compared to user-defined models, which may only have `ndims` set). PiperOrigin-RevId: 339909813 Change-Id: I1b486747aa1e413e6f24f6809fb88846ad4712ab
-
Committed by Scott Zhu
PiperOrigin-RevId: 339909803 Change-Id: I7cbcc1d74c0de2cd7f86e10e2b44f0d4b81274a0
-
Committed by Jose Baiocchi
PiperOrigin-RevId: 339909494 Change-Id: I1a9352342e9b79b55b5ba1f324fecf16a34b7647
-