- 02 Aug, 2018 (40 commits)
-
Committed by melvinljy96
Add new links and descriptions about TensorFlow to make the docs more user-friendly and provide more information, so that users can find what they need quickly and learn faster.
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207053503
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207051885
-
Committed by Sanjoy Das
And fix two lint issues. PiperOrigin-RevId: 207051473
-
Committed by Derek Murray
The `Optional` type makes it possible to represent missing values (e.g. an attempt to run `Iterator.get_next()` after the sequence has ended) without raising an error. NOTE: The `Optional` type is currently only supported on CPU; a follow-up change will add support for other devices. After that, we will add this to the `tf.contrib.data` API, with a view to eventually migrating it to core. PiperOrigin-RevId: 207049979
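A minimal pure-Python sketch of the idea (illustrative only, not the TensorFlow implementation; the names here are assumptions): an Optional either wraps a value or is empty, so reaching the end of a sequence can be reported without raising an error.

```python
class Optional:
    """Illustrative stand-in for the Optional type described above."""
    _EMPTY = object()  # sentinel distinguishing "no value" from any real value

    def __init__(self, value=_EMPTY):
        self._value = value

    def has_value(self):
        return self._value is not Optional._EMPTY

    def get_value(self):
        if not self.has_value():
            raise ValueError("Optional has no value")
        return self._value


def get_next_as_optional(iterator):
    """Return an Optional holding the next element, or an empty Optional
    once the iterator is exhausted -- no exception escapes to the caller."""
    try:
        return Optional(next(iterator))
    except StopIteration:
        return Optional()


it = iter([1, 2])
assert get_next_as_optional(it).get_value() == 1
assert get_next_as_optional(it).get_value() == 2
assert not get_next_as_optional(it).has_value()  # end of sequence, no error
```

The caller checks `has_value()` instead of wrapping every `get_next()` in a try/except, which is the ergonomic point of the change.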
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207045468
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207034363
-
Committed by Allen Lavoie
save_weights in HDF5 format does not save optimizer weights anyway, but since TensorFlow optimizers are saved in TensorFlow format it's a bit surprising when Keras optimizers aren't. PiperOrigin-RevId: 207027546
-
Committed by Derek Murray
Refactors a method that uses `OP_REQUIRES[_OK]` heavily to return a `Status` in error cases instead. PiperOrigin-RevId: 207027527
-
Committed by A. Unique TensorFlower
Allow the global step to be set to a particular value after early stopping (triggered by the number of trees) has fired. PiperOrigin-RevId: 207024504
-
Committed by Jared Duke
PiperOrigin-RevId: 207020196
-
Committed by Jared Duke
PiperOrigin-RevId: 207016946
-
Committed by Francois Chollet
PiperOrigin-RevId: 207016849
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 207014665
-
Committed by Jared Duke
The Unity TFLite plugin should now run successfully on Mac, though it might require renaming `libtensorflowlite_c.so` to `tensorflowlite_c.bundle` in the Plugins folder. PiperOrigin-RevId: 207014537
-
Committed by A. Unique TensorFlower
important that the Eager Runtime remains compatible with Android. For now only eager:execute is built since this is the main target TF Lite will depend on. PiperOrigin-RevId: 207012943
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207010324
-
Committed by Justin Lebar
It's possible for an already-existing context to be returned by cuDevicePrimaryCtxRetain. Previously, this would be handled incorrectly by CreatedContexts::Add, which was assuming that inserts into the map always succeeded. This makes XLA work with TF_CUDA_PLATFORM_GPU_DEVICE_SCHEDULE=blocking_sync, although exactly how that flag is related to this bug is unclear to me. It seems like some sort of race condition, maybe? PiperOrigin-RevId: 207010059
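The bug pattern is independent of CUDA; a hypothetical Python sketch of the fix (names are illustrative, not the actual CreatedContexts API): an insert into the map must tolerate a handle it has already seen.

```python
contexts = {}  # maps a raw context handle to its bookkeeping record

def add_context(handle):
    """Register a context handle. The driver may hand back a handle we have
    already registered, so the insert must not assume the key is new."""
    # Buggy version assumed every insert created a fresh entry:
    #   contexts[handle] = {"refcount": 1}   # silently resets existing state
    # Fixed version checks for an existing entry first:
    record = contexts.get(handle)
    if record is None:
        contexts[handle] = {"refcount": 1}
    else:
        record["refcount"] += 1


add_context("ctx_a")
add_context("ctx_a")  # already-existing context returned a second time
assert contexts["ctx_a"]["refcount"] == 2
```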
-
Generalize the quantization rewriter to handle separable convolutions. Insert fake quant ops for the weights in both the depthwise and regular convolutions inside a separable convolution op, and also for the activations produced by the first depthwise convolution. PiperOrigin-RevId: 207009650
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 207008537
-
Committed by Asim Shankar
instead of a "char*". (See https://docs.python.org/3/whatsnew/3.7.html#c-api-changes) There are additional changes needed for Python 3.7 compatibility; this change just pulls out one of them (and subsumes related attempts in #21202 and #20766). Helps with #20517. PiperOrigin-RevId: 207008013
-
Committed by A. Unique TensorFlower
When model.build() is called on tf.TensorShape((None, None, None, 1)), the code replaces the None values with 1 and the model is then built with the shape of (1, 1, 1, 1). This sets the variables of the model and hence we cannot call the model on input of shape other than (1, 1, 1, 1). In this CL, we create placeholders for the None values and build the model in graph mode. Since tf.Variable is now compatible with both eager and graph mode, the variables created after building the model in graph mode are still valid in eager mode. Now we can build the model with None's in the input shape and the model can still be called on a different shape input due to placeholders. PiperOrigin-RevId: 207005479
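The underlying shape rule can be sketched in plain Python (illustrative only, not Keras code): a None dimension must stay a wildcard, the way a placeholder dimension does, rather than being replaced with 1, or the built model rejects every other input shape.

```python
def shapes_compatible(built_shape, input_shape):
    """Treat None dims in the built shape as wildcards that match anything."""
    if len(built_shape) != len(input_shape):
        return False
    return all(b is None or b == a for b, a in zip(built_shape, input_shape))


# Replacing None with 1 bakes in a fully concrete shape:
assert not shapes_compatible((1, 1, 1, 1), (4, 32, 32, 1))
# Keeping None (as a placeholder dimension would) stays general:
assert shapes_compatible((None, None, None, 1), (4, 32, 32, 1))
# ...while still rejecting a genuine mismatch on the fixed dimension:
assert not shapes_compatible((None, None, None, 1), (4, 32, 32, 3))
```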
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 207005345
-
Committed by Shivani Agrawal
Use object-based save/restore to make dataset/iterator checkpointable in both graph as well as eager mode. PiperOrigin-RevId: 206998349
-
Committed by Yuefeng Zhou
PiperOrigin-RevId: 206998261
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 206997688
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 206995432
-
Committed by Mingsheng Hong
suggestion from apassos@ -- the underlying lib->Instantiate() does the caching. PiperOrigin-RevId: 206993242
-
Committed by Sourabh Bajaj
Output the PerDevice variables from tpu_result and reduce them correctly on the host to yield a single scalar value from the reduce function. PiperOrigin-RevId: 206990072
-
Committed by Benjamin Kramer
Fermi and below are not supported by CUDA 9, which is the oldest CUDA version supported by XLA. PiperOrigin-RevId: 206989869
-
Committed by Allen Lavoie
Works with the one-shot head (no model state in the tf.Example proto). PiperOrigin-RevId: 206988925
-
Committed by A. Unique TensorFlower
Move the GetDeviceStates() and GetNodeState() functions from protected to public, so that users can access more detailed results from VirtualScheduler. PiperOrigin-RevId: 206986812
-
Committed by A. Unique TensorFlower
output_multiplier > 1. #20451 #19607 PiperOrigin-RevId: 206983654
-
Committed by Mingsheng Hong
when we only run that If op for its side effects (e.g. enqueuing tensors). Also extended the kernel impl to handle the case where the kernel works with multiple function libraries through its lifetime (b/37549631). The code is modeled after the WhileOp kernel impl. An example graph function that runs If for its side effects is:

function {
  signature {
    name: "S12control_flow23testTensorEnqueueInCondyySb_SftF.tf_CPU.device_partition"
    input_arg {
      name: "arg_0"
      type: DT_FLOAT  # DT_BOOL
    }
  }
  node_def {
    name: "op/testTensorEnqueueInCond.14.14"
    op: "Const"
    device: "/device:CPU:0"
    attr { key: "dtype" value { type: DT_FLOAT } }
    attr {
      key: "value"
      value { tensor { dtype: DT_FLOAT tensor_shape { } float_val: 1 } }
    }
  }
  node_def {
    name: "op/testTensorEnqueueInCond_5.22.3"
    op: "If"
    input: "arg_0"
    input: "op/testTensorEnqueueInCond.14.14:output:0"
    attr {
      key: "Tcond"
      value { type: DT_FLOAT }  # DT_BOOL
    }
    attr { key: "Tin" value { list { type: DT_FLOAT } } }
    attr { key: "Tout" value { list { } } }
    attr { key: "else_branch" value { func { name: "false/testTensorEnqueueInCond_4.22.3" } } }
    attr { key: "then_branch" value { func { name: "true/testTensorEnqueueInCond_3.22.3" } } }
  }
}

PiperOrigin-RevId: 206983563
-
Committed by Yuefeng Zhou
1) move some common utils to this test base 2) rename task_index to task_id PiperOrigin-RevId: 206981192
-
Committed by Sanjoy Das
aligned_buffer_bytes in compiler/aot/runtime.cc was checking sizes[i] == -1 (rather than sizes[i] < 0) to decide whether sizes[i] should count towards the total size.

Original CL description: Overhaul XLA:CPU's calling convention. This CL introduces a clean separation between calls to "thread local" and "global" computations in XLA:CPU.

Global computations are:
- kWhile body and condition computations
- kConditional true and false computations
- kCall callees

Parameters and result buffers for these calls are assigned a static BufferAllocation::Slice by buffer assignment, so they don't require pointers to result buffers and parameters to be explicitly passed in. In fact, passing in result and parameter buffers is actively misleading, because in cases like:

while_condition {
  val = (s32[], pred[]) infeed()
  ROOT result = get-tuple-element(val), index=0
}

there is no instruction explicitly copying the result of the computation into the result buffer. Instead, it is up to the caller to pick up the correct result buffer by asking buffer assignment (which would be the buffer where infeed wrote its second tuple component).

Thread local computations are all the other nested computations except fusion, e.g. computations used by kMap and kReduce. Parameters and result buffers for these calls are assigned a "thread local" BufferAllocation::Slice, which in XLA:CPU is mapped to allocas. Since these are not static addresses, we *do* need to pass in parameter and result buffers. The output is written to the result buffer by "allocating" the storage for the root into the result buffer passed in by the caller.

There are two cleanup items that I kept off this CL to make reviews easier:
- We should rename "temps" to something more generic, like "buffer_table". I'll do that in a followup CL.
- We should use GatherComputationsByAllocationType from buffer_assignment.cc to CHECK that we use thread local calls for thread local callees and global calls for global callees.

PiperOrigin-RevId: 206980796
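The size-accounting bug can be shown with a small hypothetical Python sketch (the real code is C++ in compiler/aot/runtime.cc; names and the alignment value are assumptions): any negative size marks a buffer allocated elsewhere, but the original check only skipped the exact value -1.

```python
def aligned_buffer_bytes(sizes, align=32):
    """Sum the aligned sizes of buffers that need backing storage.
    Negative entries mark buffers whose storage comes from elsewhere."""
    total = 0
    for s in sizes:
        if s < 0:  # original code only skipped the exact sentinel -1
            continue
        total += ((s + align - 1) // align) * align  # round up to alignment
    return total


assert aligned_buffer_bytes([32, -1, 1]) == 64
# A negative size other than -1 must also be skipped, not counted:
assert aligned_buffer_bytes([32, -2, 1]) == 64
```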
-
Committed by Amit Patankar
packages instead. PiperOrigin-RevId: 206978028
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 206973087
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 206972475
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 206967298
-