- 17 July 2019, 40 commits
-
Committed by Benjamin Barenblat
PiperOrigin-RevId: 258486264
-
Committed by A. Unique TensorFlower
- Changes the v2 loop to use a dataset iterator creator more in line with what we had been planning, so functions explicitly take iterators as inputs instead of functions that return iterators, and aren't excessively retraced.
- Separates out simplified train/test/predict_on_batch methods & extra utilities, for a cleaner separation between v1 & v2 code. This also allows better fallbacks, because run_distributed no longer has to force model._distribution_strategy to be set. E.g. the *_generator methods fall back correctly.
- Adds more fallbacks for now (e.g. Keras sequences, and cloning arg = true).
- Changes the v2 _get_iterator_input_utility to work with multi-io subclass models, including models where the input dataset contains dicts mapping input name to value.
- Allows train_eager to work with skipped output indices (fixes gru_test & lstm rewrite test).

PiperOrigin-RevId: 258484595
-
Committed by Justin Lebar
Division by a constant compiles down to multiply and shift, and we consider those operations to be cheap. PiperOrigin-RevId: 258481139
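As a quick illustration of why such division is cheap, here is the multiply-and-shift trick in Python (a sketch of what compilers emit; 0xAAAAAAAB is the standard magic constant for unsigned 32-bit division by 3):

```python
# Sketch: a compiler replaces `n / 3` for a 32-bit unsigned n with a
# multiply by a magic constant followed by a right shift -- no divide
# instruction needed.
MAGIC = 0xAAAAAAAB
SHIFT = 33

def div3(n):
    """Compute n // 3 for 0 <= n < 2**32 via multiply-and-shift."""
    return (n * MAGIC) >> SHIFT

print(div3(10))          # 3
print(div3(0xFFFFFFFF))  # 1431655765
```

In hardware, the 64-bit multiply and the shift each cost a cycle or two, versus tens of cycles for an integer divide, which is why these compiled-down forms count as cheap.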
-
Committed by Allen Lavoie
Fixes is-function checking and relaxes attribute lookups when a custom op isn't (properly?) registered with Python. PiperOrigin-RevId: 258480818
-
Committed by Allen Lavoie
Apparently it's getting a FunctionDef with duplicate output names. The names only matter for importing Defuns, so ignoring the issue seems fine for now. PiperOrigin-RevId: 258477786
-
Committed by Justin Lebar
Previously OperandElementUse did not know about kGather, so it incorrectly said gather reused both of its operands. In fact it does a plain elementwise use of operand 0 and a permuting use of operand 1. In addition, OperandElementUse previously said that bitcast was a permuting use. This isn't what we want: if a fusion uses an input twice, once directly and once via a bitcast, that should not count as "reuse", because we're not permuting anything; we're still going to access the elements in the same order. PiperOrigin-RevId: 258476532
-
Committed by A. Unique TensorFlower
Read->read and write->write dependencies are not considered a safety problem, but the comment still stated they would not be clustered. PiperOrigin-RevId: 258474807
-
Committed by Gunhan Gulsoy
Hopefully Fixes #29617 PiperOrigin-RevId: 258473584
-
Committed by Yunxing Dai
PiperOrigin-RevId: 258473579
-
Committed by A. Unique TensorFlower
Clean up the way logging is configured and improve error checking. Make failure conditions in micro_interpreter clearer. Reduce tensor arena size in micro_vision so that it builds for Sparkfun Edge. PiperOrigin-RevId: 258473403
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258472796
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 258472358
-
Committed by Blake Hechtman
PiperOrigin-RevId: 258470701
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258469750
-
Committed by Guangda Lai
We can dump it out for TF 2.0. PiperOrigin-RevId: 258468798
-
Committed by Gaurav Jain
The device_mgr_ pointer is unused and unnecessarily adds a circular reference. Removing it also improves C++ style via: 1) removal of DeviceMgr as a friend class, 2) removal of the forward declaration, 3) removal of the CHECK statement. PiperOrigin-RevId: 258466138
-
Committed by Haoliang Zhang
PiperOrigin-RevId: 258463148
-
Committed by Reed Wanderman-Milne
Before, the error message would be something like "LossScaleOptimizer object has no attribute _hyper", even if the accessed attribute was not _hyper. PiperOrigin-RevId: 258462751
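A minimal sketch of this kind of fix, using a hypothetical delegating wrapper (not the actual LossScaleOptimizer code): report the attribute the caller asked for, not the internal one the lookup happened to fail on.

```python
class Wrapper:
    """Hypothetical delegating wrapper illustrating the error-message fix."""

    def __init__(self, inner):
        self._inner = inner

    def __getattr__(self, name):
        # __getattr__ runs only when normal lookup fails. Delegate to the
        # wrapped object, but if that fails too, name the attribute the
        # caller actually asked for -- not an internal one like '_hyper'.
        try:
            return getattr(self._inner, name)
        except AttributeError:
            raise AttributeError(
                "'{}' object has no attribute '{}'".format(
                    type(self).__name__, name)) from None
```

The `from None` suppresses the misleading inner AttributeError so the user only sees the attribute they actually accessed.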
-
Committed by Tong Shen
PiperOrigin-RevId: 258460978
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 258454960
-
Committed by Igor Ganichev
This CL should have no behavior changes. It moves the tensor caches that were fields of eager Context into a global map indexed by context id. This CL is needed so that eager Context does not have references to EagerTensors. Future changes will add a reference from EagerTensor to eager Context. This will mimic the ownership structure of the corresponding C++ objects and ensure that Python deletes the context only after all the tensors have been deleted. The latter will allow us to simplify EagerContext destruction and remove ref counting from it. PiperOrigin-RevId: 258454245
-
Committed by A. Unique TensorFlower
Currently, the TFLite flatbuffer exporter only exports the shapes for inputs and constants, which is all that is required by TFLite. However, exporting more information will improve the ability to round-trip between TFLite and MLIR and could enable the TFLite runtime to infer more information statically. PiperOrigin-RevId: 258452655
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258450422
-
Committed by Christopher Suter
(see, e.g., https://stackoverflow.com/questions/4825234/exception-traceback-is-hidden-if-not-re-raised-immediately) PiperOrigin-RevId: 258445844
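The pitfall in the linked question, and the capture-immediately pattern that avoids it, can be sketched as:

```python
import sys
import traceback

def risky():
    raise ValueError("boom")

def call_and_capture():
    try:
        risky()
    except ValueError:
        # Capture (type, value, traceback) immediately, before any other
        # except block runs and the original frames get harder to recover.
        return sys.exc_info()

exc_type, exc_value, exc_tb = call_and_capture()

# Much later, re-raise with the original traceback attached:
try:
    raise exc_value.with_traceback(exc_tb)
except ValueError:
    frames = traceback.extract_tb(sys.exc_info()[2])

print([f.name for f in frames])  # the frame inside risky() is preserved
```

Without the immediate capture (or a bare `raise`), the re-raised exception can lose the frames that show where it originally occurred.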
-
Committed by Andrew Audibert
PiperOrigin-RevId: 258445832
-
Committed by Allen Lavoie
We weren't properly handling the op-type-is-function-name calling convention. PiperOrigin-RevId: 258441889
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258438818
-
Committed by Yifei Feng
PiperOrigin-RevId: 258438229
-
Committed by Yunxing Dai
Previously, we only had a computation scheduler, which runs the heap simulator once per computation. For models with a large number of computations, this creates extremely slow compilation times. This CL introduces a module scheduler, which only runs the heap simulator after the whole module is scheduled. It also contains a helper function that automatically converts a computation scheduler into a module scheduler. PiperOrigin-RevId: 258436352
-
Committed by A. Unique TensorFlower
Add code that translates between the TFLite and MLIR type systems. This begins the process of building the translator by translating the types of input arguments. The tests are updated to reflect the beginning of the actual work. PiperOrigin-RevId: 258433154
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258432123
-
Committed by A. Unique TensorFlower
Add a flatbuffer_importer.cc that registers a translation from TFLite Flatbuffer to MLIR and incorporate it into the flatbuffer_translate tool. The translator does not yet perform any translation, but only validates that the input file contains a FlatBuffer and prints its version number and the names and input tensor IDs of each subgraph. The tests don't actually include the expected correct output, but instead simply make sure that the initial code, which only calls the flatbuffer parser and prints some simple information, functions correctly. PiperOrigin-RevId: 258431902
-
Committed by Yifei Feng
In the previous change, we switched from calling importlib on a list of public APIs to using direct import statements. As a result, the list of APIs was not passed to TFModuleWrapper, and attributes starting with "_" did not show up in __all__. To fix this, we manually include all eligible symbols in __all__, as was done before. Built and tested with the pip package. PiperOrigin-RevId: 258431865
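A simplified sketch of rebuilding __all__ this way (a hypothetical helper with made-up symbol names, not the actual TFModuleWrapper code):

```python
import types

def build_all(module_dict, allowed_private=("_private_symbol",)):
    """Hypothetical sketch: rebuild __all__ from a module's dict, keeping
    explicitly exported underscore-prefixed API symbols that a plain
    name-based filter would drop."""
    public = [name for name, value in module_dict.items()
              if not name.startswith("_")
              and not isinstance(value, types.ModuleType)]
    # Re-add the underscore-prefixed names that are part of the public API.
    public += [name for name in allowed_private if name in module_dict]
    return sorted(public)
```

The key point is the second step: filtering purely on a leading underscore silently drops intentionally exported symbols, so those must be added back from an explicit allow-list.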
-
Committed by Benoit Jacob
Simplify ruy's main loop. Most of the next_* business was unnecessary complication. This code didn't know whether it wanted to hide the latency of an atomic increment (60 cycles) or of a block coords computation (comparable). Now it's more intentional about hiding the atomic increment latency, because that's the one instruction here that will always have high latency; for the rest, we can only hope that the compiler will exploit any opportunity to inline the block computation and distribute its instructions so as to hide some of the latency. The more important point is that while we don't really know which version would run faster, and this will at most make a small impact on latency, there is on the other hand a substantial code simplification here, and that matters because this is very central code. Notice in particular how the block coords computation used to be written twice, once before the loop body and once at the end of it, and now it appears only once. PiperOrigin-RevId: 258429272
-
Committed by A. Unique TensorFlower
This is preparation for merging the internal xprof and external OSS versions of the annotation implementation; I need to make sure the benchmark is comparable or better. PiperOrigin-RevId: 258425457
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 258422018
-
Committed by Ian Langmore
This doesn't make sense for matvec (and isn't in the base class matvec definition). PiperOrigin-RevId: 258417922
-
Committed by Andy Ly
PiperOrigin-RevId: 258413874
-
Committed by Smit Hinsu
PiperOrigin-RevId: 258412619
-
Committed by Akshay Modi
It would fail anyway on ndarrays, whose numpy.dtype doesn't have is_floating, at the check on the next line, so this gives a nicer error message. Also use the common RegisterType functionality to check for resource variables. PiperOrigin-RevId: 258407797
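The shape of the check can be sketched with a hypothetical helper (TF's actual code differs; only the `.is_floating` attribute on TF DTypes is real):

```python
import numpy as np

def dtype_is_floating(dtype):
    """Hypothetical sketch: TF DType objects expose .is_floating, but
    numpy.dtype does not, so probe for the attribute first and fall back
    to numpy's own classification, with a clear error otherwise."""
    if hasattr(dtype, "is_floating"):
        return dtype.is_floating
    if isinstance(dtype, np.dtype):
        return np.issubdtype(dtype, np.floating)
    raise TypeError(
        "Expected a TF DType or numpy.dtype, got {!r}".format(dtype))
```

Probing for the attribute up front turns a confusing `AttributeError` deep in the call into either a correct answer or an explicit `TypeError` naming the offending value.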
-