- 18 Mar 2020, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 301443563 Change-Id: I852269b86039a71466ddeadfe3ce03d75dc45fda
- 06 Mar 2020, 1 commit

  Submitted by A. Unique TensorFlower

  PiperOrigin-RevId: 299110761 Change-Id: I66ecaa9d01dc441f091888bef3f24d220e9180c5
- 07 Feb 2020, 1 commit

  Submitted by Jiri Simsa

  [tf.data] Adding information necessary for reconstructing the input pipeline graph from TraceMe metadata.
  PiperOrigin-RevId: 293728779 Change-Id: Ib06750c4c360db603c00eb2133ee936c243fdf88
- 26 Nov 2019, 1 commit

  Submitted by Derek Murray

  PiperOrigin-RevId: 282386008 Change-Id: I14c1ac544da14e536855e85a455b6cb4ed885467
- 17 Aug 2019, 1 commit

  Submitted by Ihor Indyk

  [tf.data] Adds an upper bound for the total buffer limit of the model in `Model::Optimize`, expressed as a percentage of available RAM.
  PiperOrigin-RevId: 263789432
- 08 Aug 2019, 1 commit

  Submitted by Jiri Simsa

  This CL:
  - removes the unused `DatasetBase::Save()` and related tests
  - replaces `SerializationContext::optimization_only` with multiple functionality-specific flags (`check_external_state`, `fail_if_unimplemented`, and `serialize_data_tensors`)
  - introduces `DatasetBase::CheckExternalState` as an error-raising replacement for `DatasetBase::IsStateful`, making it possible to communicate the reason serialization failed through the error status
  - adds `IteratorBase::SaveInternal` and `IteratorBase::RestoreInternal` in preparation for making these methods pure virtual

  PiperOrigin-RevId: 262235093
- 26 Jul 2019, 1 commit

  Submitted by Jiri Simsa

  After this change, restoring an iterator from a checkpoint requires that the iterator be initialized with a dataset matching the one used to initialize the iterator from which the checkpoint was created. In other words, if the Python definition of the input pipeline changes, restoration of the iterator will fail. The motivation for this change is to make it possible to save (and restore) datasets whose graph cannot be serialized (e.g. because it contains ops with resource inputs). This in turn allows tf.data to implement "reshuffle each iteration" or in-memory caching between different Python iterators over the same dataset.
  PiperOrigin-RevId: 260144783
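A minimal sketch of the checkpointing behavior described above, using the TF 2.x `tf.train.Checkpoint` API with eager execution (the checkpoint path is arbitrary): the iterator's position is saved and restored, provided it is restored into an iterator created from a matching dataset definition.

```python
import os
import tempfile

import tensorflow as tf

ds = tf.data.Dataset.range(5)
it = iter(ds)
first = next(it).numpy()    # consumes element 0

# Save iterator state after one element has been consumed.
ckpt = tf.train.Checkpoint(iterator=it)
path = ckpt.save(os.path.join(tempfile.mkdtemp(), "iter"))

skipped = next(it).numpy()  # consumes element 1
ckpt.restore(path)          # rewinds the iterator to the saved position

resumed = next(it).numpy()  # yields element 1 again
```

If the dataset definition had changed between save and restore, the restore would fail rather than silently produce mismatched state.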
- 25 Jun 2019, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 254812919
- 04 Apr 2019, 1 commit

  Submitted by Jiri Simsa

  [tf.data] Adjusting the auto-tuning period to 1 minute (from the previous, incorrect value of 1000 minutes) and improving auto-tuning logging.
  PiperOrigin-RevId: 241826050
- 14 Mar 2019, 1 commit

  Submitted by Jiri Simsa

  [tf.data] Exposing an option for specifying the CPU budget for autotuning parallelism, and nesting the autotuning-related options under `experimental_optimization`.
  PiperOrigin-RevId: 238369191
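A hedged sketch of the option described above. At the time of this commit the knob was nested under `options.experimental_optimization`; recent TF releases expose it as `options.autotune.cpu_budget`, which is what this example assumes:

```python
import tensorflow as tf

ds = tf.data.Dataset.range(100).map(
    lambda x: x * 2, num_parallel_calls=tf.data.AUTOTUNE)

opts = tf.data.Options()
opts.autotune.enabled = True   # autotuning is on by default; shown for clarity
opts.autotune.cpu_budget = 2   # cap the number of cores autotuning may assume
ds = ds.with_options(opts)

total = sum(int(x) for x in ds)  # the budget changes tuning, not semantics
```

The budget only constrains how much parallelism the autotuner will allocate; the elements produced by the pipeline are unaffected.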
- 02 Mar 2019, 1 commit

  Submitted by Derek Murray

  The previous implementation of many core `tf.data` transformations (e.g. `Dataset.prefetch()`) would create one or more threads each time an iterator over those datasets was created (e.g. `ds.prefetch(N).repeat(100)` would create and destroy 100 threads). In addition to the overhead of thread creation, this interacts poorly with some malloc implementations and can contribute to memory fragmentation. The new implementation maintains an unbounded pool of physical threads in each iterator (or `MultiDeviceIterator`) resource, and returns logical "threads" to that pool when their work is complete instead of exiting them.
  PiperOrigin-RevId: 236413014
- 15 Jan 2019, 1 commit

  Submitted by Jiri Simsa

  This CL:
  - adds counters for tf.data elements, autotuning, and optimizations
  - sets the number of iterations of the `tf_data_meta_optimizer` to one; iteration of tf.data optimizations is handled by the tf.data meta optimizer itself
  - adds the `alwayslink` attribute to all tf.data optimization BUILD targets to ensure they are always registered (without this, they would not be registered in the TensorFlow server binary used for local testing), and further cleans up the visibility and dependencies of //third_party/tensorflow/core/grappler/optimizers/data/BUILD
  - introduces `TFDataOptimizerBase` as a base class for tf.data optimizations
  - moves TensorFlow metrics into the `tensorflow::metrics` namespace

  PiperOrigin-RevId: 229302097
- 21 Dec 2018, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 226402626
- 05 Dec 2018, 2 commits

  Submitted by Steve Nesae

  Submitted by Jiri Simsa

  [tf.data] Adding `tf.data.experimental.cardinality()`, which provides information about dataset cardinality.
  PiperOrigin-RevId: 224030418
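A minimal usage sketch of the API this commit adds, assuming a recent TF 2.x release with eager execution: `cardinality()` returns the element count when it is statically known, and the sentinels `INFINITE_CARDINALITY` or `UNKNOWN_CARDINALITY` otherwise.

```python
import tensorflow as tf

known = tf.data.experimental.cardinality(
    tf.data.Dataset.range(10).batch(3))            # 4 batches: 3 + 3 + 3 + 1

infinite = tf.data.experimental.cardinality(
    tf.data.Dataset.range(10).repeat())            # repeats forever

unknown = tf.data.experimental.cardinality(
    tf.data.Dataset.range(10).filter(lambda x: x % 2 == 0))  # data-dependent
```

The sentinels let callers distinguish "cannot be determined statically" from "unbounded" without iterating the dataset.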
- 09 Nov 2018, 1 commit

  Submitted by A. Unique TensorFlower

  The wrapped iterator's `GetNext` method is thread-safe. Calling it with `mu_` exclusively locked serializes calls to `impl_->GetNext()`.
  PiperOrigin-RevId: 220799359
- 06 Nov 2018, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 220125067
- 31 Oct 2018, 2 commits

  Submitted by Jiri Simsa

  [tf.data] Refactoring the performance modeling implementation and adding performance modeling for all core and experimental tf.data kernels.
  PiperOrigin-RevId: 219406929

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 219390881
- 26 Oct 2018, 1 commit

  Submitted by Derek Murray

  PiperOrigin-RevId: 218765742
- 09 Oct 2018, 2 commits

  Submitted by Derek Murray

  PiperOrigin-RevId: 216260575

  Submitted by Derek Murray

  PiperOrigin-RevId: 216247929
- 04 Oct 2018, 2 commits

  Submitted by Derek Murray

  PiperOrigin-RevId: 215607038

  Submitted by Derek Murray

  PiperOrigin-RevId: 215579950
- 21 Sep 2018, 1 commit

  Submitted by Jiri Simsa

  [tf.data] Moving auto-tuning optimizations into a background thread, refactoring the API for exposing tunable parameters, and removing `model::Node` from the public API.
  PiperOrigin-RevId: 213907565
- 18 Sep 2018, 1 commit

  Submitted by Jiri Simsa

  [tf.data] Adding support for `tf.data.AUTOTUNE` as a special value for the `num_parallel_calls` argument of `tf.data.Dataset.map()`, `tf.data.Dataset.interleave()`, and `tf.contrib.data.map_and_batch()`. When `tf.data.AUTOTUNE` is specified, the level of parallelism is determined at runtime. The underlying mechanism instruments the input pipeline to build a performance model, then uses the model to find optimal values for the parallelism knobs.
  PiperOrigin-RevId: 213283297
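A brief sketch of the feature described above, written against the current `tf.data.AUTOTUNE` spelling (at the time of this commit the symbol also lived under `tf.data.experimental`/`tf.contrib.data`):

```python
import tensorflow as tf

ds = tf.data.Dataset.range(8)
# The runtime picks the parallelism level for map based on its performance model.
ds = ds.map(lambda x: x * x, num_parallel_calls=tf.data.AUTOTUNE)
ds = ds.prefetch(tf.data.AUTOTUNE)

squares = [int(x) for x in ds]  # element order is preserved by default
```

Autotuning only tunes the degree of parallelism; the output of the pipeline is the same as with `num_parallel_calls=1`.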
- 12 Sep 2018, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 212557406
- 06 Sep 2018, 1 commit

  Submitted by Derek Murray

  PiperOrigin-RevId: 211733735
- 14 Aug 2018, 1 commit

  Submitted by Jiri Simsa

  - replacing `OpKernelContext` with the newly introduced `DatasetContext` in the `DatasetBase` constructor, making it possible to instantiate `DatasetBase` in places where an `OpKernelContext` instance is not available
  - replacing the `dataset::MakeIteratorContext(OpKernelContext* ctx)` factory with an `IteratorContext(OpKernelContext *ctx)` constructor
  - folding `GraphDatasetBase` into `DatasetBase` and removing the default implementation of `AsGraphDefInternal`, making it the responsibility of the derived class to implement it, in order to encourage developers to provide serialization logic

  PiperOrigin-RevId: 208560010
- 11 Aug 2018, 2 commits

  Submitted by Jiri Simsa

  This CL:
  - changes the `OptimizeDataset` checkpointing logic to checkpoint the optimized dataset (as opposed to the original dataset plus the optimizations, which re-ran optimization every time a checkpoint was restored)
  - replaces `OpKernelContext` with the newly introduced `SerializationContext` in the signature of `AsGraphDefInternal`, reducing the scope of the context and simplifying the logic for overriding the `FunctionLibraryDefinition` when optimizations take place

  PiperOrigin-RevId: 208282562

  Submitted by Jiri Simsa

  Renaming `AddParentDataset`, `SaveParent`, and `RestoreParent` to `AddInputDataset`, `SaveInput`, and `RestoreInput`.
  PiperOrigin-RevId: 208272695
- 01 Jun 2018, 2 commits

  Submitted by Brennan Saeta

  By marking `DebugString()` as const, we can make some error messages more descriptive: because `DatasetIterator` marks the return value of its `dataset()` function const, a non-const `DebugString()` cannot be called on it.
  PiperOrigin-RevId: 198796894

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 198772254
- 08 Feb 2018, 1 commit

  Submitted by Derek Murray

  This change moves the `OpKernel` and `DatasetBase` implementations to "tensorflow/contrib/data/kernels", where they are packaged as a custom op library. This demonstrates (and enforces through continuous integration) the ability to build a C++ Dataset implementation in a custom op library. Other contrib Dataset implementations will move in subsequent changes.
  PiperOrigin-RevId: 184938885
- 12 Jan 2018, 1 commit

  Submitted by Derek Murray

  The `IteratorContext` type contains all of the state needed to restore an iterator, and it is easier to construct, so this change makes it possible to control the environment of the restoration more easily (e.g. when using function overlays on a shared runtime).
  PiperOrigin-RevId: 181662977
- 15 Dec 2017, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 179112798
- 23 Nov 2017, 1 commit

  Submitted by Shivani Agrawal

  PiperOrigin-RevId: 176715082
- 18 Nov 2017, 1 commit

  Submitted by A. Unique TensorFlower

  PiperOrigin-RevId: 176190698
- 17 Aug 2017, 1 commit

  Submitted by Jiri Simsa

  PiperOrigin-RevId: 165500724
- 01 Jul 2017, 1 commit

  Submitted by Derek Murray

  This transformation acts as the identity function on an input dataset, except that any errors that arise while computing an input element are silently dropped. It can be useful when an input pipeline contains ops that perform I/O and/or parse unclean data.
  PiperOrigin-RevId: 160658678
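The transformation described above later surfaced publicly as `tf.data.experimental.ignore_errors()`; a minimal sketch, assuming a recent TF 2.x release:

```python
import tensorflow as tf

ds = tf.data.Dataset.from_tensor_slices([1., 2., 0., 4.])
# 1/0 yields inf, check_numerics raises at runtime, and
# ignore_errors() silently drops the offending element.
ds = ds.map(lambda x: tf.debugging.check_numerics(1. / x, "bad element"))
ds = ds.apply(tf.data.experimental.ignore_errors())

values = [float(v) for v in ds]
```

The surviving elements are the reciprocals of the valid inputs; the element that would have produced `inf` is simply absent from the output.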