- 17 Jul, 2019: 40 commits
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 258472358
-
Committed by Blake Hechtman
PiperOrigin-RevId: 258470701
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258469750
-
Committed by Guangda Lai
we can dump it out for TF 2.0. PiperOrigin-RevId: 258468798
-
Committed by Gaurav Jain
The device_mgr_ pointer is unused and unnecessarily adds a circular reference. Removing it also leads to cleaner C++ style: 1) DeviceMgr is removed as a friend class; 2) a forward declaration is removed; 3) a CHECK statement is removed. PiperOrigin-RevId: 258466138
-
Committed by Haoliang Zhang
PiperOrigin-RevId: 258463148
-
Committed by Reed Wanderman-Milne
Before, the error message would be something like "LossScaleOptimizer object has no attribute _hyper", even if the accessed attribute was not _hyper. PiperOrigin-RevId: 258462751
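A minimal Python sketch of the kind of bug described above (the class and the `_hyper` dict are stand-ins, not the TF source): a wrapper whose `__getattr__` delegates through an internal dict can leak the internal name into the `AttributeError` it raises, instead of the attribute the caller actually asked for. The fixed version reports the real name:

```python
class HyperWrapper:
    """Illustrative stand-in for an optimizer wrapper; not TF code."""

    def __init__(self, **hyper):
        # Hyperparameters live in an internal dict named _hyper.
        object.__setattr__(self, "_hyper", dict(hyper))

    def __getattr__(self, name):
        # __getattr__ can be invoked for _hyper itself before __init__
        # has run (e.g. during copy/pickle); avoid infinite recursion.
        if name == "_hyper":
            raise AttributeError(name)
        hyper = object.__getattribute__(self, "_hyper")
        if name in hyper:
            return hyper[name]
        # Report the attribute the caller actually accessed,
        # not the internal "_hyper" name.
        raise AttributeError(
            f"{type(self).__name__} object has no attribute {name!r}")


w = HyperWrapper(learning_rate=0.1)
print(w.learning_rate)  # 0.1
```

Accessing a missing attribute such as `w.momentum` now raises an error naming `momentum`, not `_hyper`.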
-
Committed by Tong Shen
PiperOrigin-RevId: 258460978
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 258454960
-
Committed by Igor Ganichev
This CL should have no behavior changes. It moves the tensor caches that were fields of the eager Context into a global map indexed by context id. This CL is needed so that the eager Context does not hold references to EagerTensors. Future changes will add a reference from EagerTensor to the eager Context. This will mimic the ownership structure of the corresponding C++ objects and ensure that Python deletes the context only after all the tensors have been deleted, which will allow us to simplify EagerContext destruction and remove ref counting from it. PiperOrigin-RevId: 258454245
-
Committed by A. Unique TensorFlower
Currently, the TFLite flatbuffer exporter only exports the shapes of inputs and constants, which is all that TFLite requires. However, exporting more information will improve the ability to round-trip between TFLite and MLIR and could enable the TFLite runtime to infer more information statically. PiperOrigin-RevId: 258452655
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258450422
-
Committed by Christopher Suter
(see, e.g., https://stackoverflow.com/questions/4825234/exception-traceback-is-hidden-if-not-re-raised-immediately) PiperOrigin-RevId: 258445844
-
Committed by Andrew Audibert
PiperOrigin-RevId: 258445832
-
Committed by Allen Lavoie
We weren't properly handling the op-type-is-function-name calling convention. PiperOrigin-RevId: 258441889
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258438818
-
Committed by Yifei Feng
PiperOrigin-RevId: 258438229
-
Committed by Yunxing Dai
Previously, we only had a computation scheduler, which runs the heap simulator once per computation. For models with a large number of computations, this creates extremely slow compilation times. This CL introduces a module scheduler, which runs the heap simulator only after the whole module has been scheduled. It also contains a helper function that automatically converts a computation scheduler into a module scheduler. PiperOrigin-RevId: 258436352
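The conversion described above can be sketched in Python (toy names; the real helper lives in XLA's C++ scheduler): a per-computation scheduler is lifted into a module scheduler that defers the expensive global analysis, standing in for the heap simulator, until every computation has been scheduled:

```python
def to_module_scheduler(schedule_computation, analyze_module):
    """Lift a per-computation scheduler into a whole-module scheduler."""

    def schedule_module(module):
        # First schedule every computation in the module...
        schedule = {name: schedule_computation(instrs)
                    for name, instrs in module.items()}
        # ...then run the expensive analysis (heap simulation in XLA)
        # exactly once, instead of once per computation.
        return schedule, analyze_module(schedule)

    return schedule_module


# Toy "module": computation name -> list of instruction names.
module = {"main": ["b", "a"], "helper": ["c"]}
scheduler = to_module_scheduler(
    schedule_computation=sorted,                         # toy ordering
    analyze_module=lambda s: sum(map(len, s.values())))  # toy analysis
order, cost = scheduler(module)
```

The point of the shape is that `analyze_module` runs once per module rather than once per computation, which is where the compile-time savings come from.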
-
Committed by A. Unique TensorFlower
Add code that translates between the TFLite and MLIR type systems. This begins the process of building the translator by translating the types of input arguments. The tests are updated to reflect the beginning of the actual work. PiperOrigin-RevId: 258433154
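For illustration only (the real translator is C++ and covers many more types), a minimal mapping between a few TFLite tensor type names and MLIR builtin type names, in the spirit of the change above; the table below is a hypothetical subset, not the actual mapping:

```python
# Hypothetical subset of a TFLite -> MLIR type mapping.
TFLITE_TO_MLIR = {
    "FLOAT32": "f32",
    "INT32": "i32",
    "INT64": "i64",
    "BOOL": "i1",
}


def translate_type(tflite_type):
    """Translate a TFLite tensor type name to an MLIR type name."""
    try:
        return TFLITE_TO_MLIR[tflite_type]
    except KeyError:
        # Unsupported types should fail loudly rather than silently
        # producing an invalid MLIR type.
        raise ValueError(f"unsupported TFLite type: {tflite_type}")
```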
-
Committed by Pavithra Vijay
PiperOrigin-RevId: 258432123
-
Committed by A. Unique TensorFlower
Add a flatbuffer_importer.cc that registers a translation from TFLite Flatbuffer to MLIR and incorporate it into the flatbuffer_translate tool. The translator does not yet perform any translation, but only validates that the input file contains a FlatBuffer and prints its version number and the names and input tensor IDs of each subgraph. The tests don't actually include the expected correct output, but instead simply make sure that the initial code, which only calls the flatbuffer parser and prints some simple information, functions correctly. PiperOrigin-RevId: 258431902
-
Committed by Yifei Feng
In the previous change, we switched from calling importlib on a list of public APIs to using direct import statements. As a result, the list of APIs was not passed to TFModuleWrapper, and attributes starting with "_" did not show up in __all__. To fix this, we manually include all eligible symbols in __all__, as was done before. Built and tested with the pip package. PiperOrigin-RevId: 258431865
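A small Python sketch of the fix described above (the module and helper are invented; the real code is in TF's module-wrapper machinery): rebuild `__all__` by manually including every eligible attribute of the module:

```python
import types


def build_all(module):
    """Return the eligible public symbols for a module's __all__."""
    # Include everything that does not start with "_",
    # restoring the previous behavior.
    return [name for name in dir(module) if not name.startswith("_")]


# Build a throwaway module with one public and one private attribute.
mod = types.ModuleType("demo")
mod.public_fn = lambda: 42
mod._private_helper = object()
mod.__all__ = build_all(mod)
```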
-
Committed by Benoit Jacob
Simplify ruy's main loop. Most of the next_* business was unnecessary complication. This code didn't know whether it wanted to hide the latency of an atomic increment (60 cycles) or of a block-coordinates computation (comparable). Now it is more intentional about hiding the atomic increment's latency, because that is the one instruction here that will always have high latency; for the rest, we can only hope that the compiler will exploit any opportunity to inline the block computation and distribute its instructions so as to hide some of the latency. The more important point is that while we don't really know which version would run faster, and the difference is at most a small impact on latency, there is a substantial code simplification here, and that matters because this is very central code. Notice in particular how the block-coordinates computation was written twice, once before the loop body and once at the end of it, and now appears only once. PiperOrigin-RevId: 258429272
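The latency-hiding pattern described above can be sketched in Python (ruy itself is C++; this only shows the shape of the idea): each worker fetches the next block index with an atomic increment before processing the current block, so the increment's latency overlaps with useful work:

```python
import itertools
import threading


def run_blocks(num_blocks, process_block, num_threads=2):
    # itertools.count() increments atomically under CPython's GIL,
    # standing in for ruy's shared atomic block counter.
    counter = itertools.count()

    def worker():
        block = next(counter)
        while block < num_blocks:
            # Fetch the *next* index before doing the work, so the
            # "atomic increment" overlaps with processing this block.
            next_block = next(counter)
            process_block(block)
            block = next_block

    threads = [threading.Thread(target=worker) for _ in range(num_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()


processed = []
lock = threading.Lock()


def record(block):
    with lock:
        processed.append(block)


run_blocks(8, record)
```

Each index is drawn from the counter exactly once, so every block is processed exactly once regardless of how the threads interleave.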
-
Committed by A. Unique TensorFlower
This is preparation for merging the internal xprof and external OSS versions of the annotation implementation; I need to make sure the benchmark is comparable or better. PiperOrigin-RevId: 258425457
-
Committed by Eugene Zhulenev
PiperOrigin-RevId: 258422018
-
Committed by Ian Langmore
This doesn't make sense for matvec (and isn't in the base-class matvec definition). PiperOrigin-RevId: 258417922
-
Committed by Andy Ly
PiperOrigin-RevId: 258413874
-
Committed by Smit Hinsu
PiperOrigin-RevId: 258412619
-
Committed by Akshay Modi
It would fail on ndarrays anyway, since numpy.dtype does not have the is_floating check on the next line, so this gives it a nicer error message. Also use the common RegisterType functionality to check for resource variables. PiperOrigin-RevId: 258407797
-
Committed by Jiri Simsa
PiperOrigin-RevId: 258404510
-
Committed by Priya Gupta
PiperOrigin-RevId: 258399687
-
Committed by Nupur Garg
PiperOrigin-RevId: 258395114
-
Committed by Feng Liu
This patch contains various changes to make the workflow work for both the UINT8 and INT8 quantization schemes: - The "restricted_output_params" in the "OpQuantSpec" is changed to a map, so an op definition can define restrictions for both UINT8 and INT8. - An INT8 quantization spec is added to the TFLite op definitions. - A "quantize_sign" flag is passed into the pre-quantize pass, so the spec and propagation for different signs can be configured via this flag. Follow-up patches will read this flag from the user's command line. PiperOrigin-RevId: 258393145
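A toy Python sketch of the data-structure change described above (the field names follow the commit message, but the surrounding types are invented; the real code is C++ in the MLIR TFLite quantizer): `restricted_output_params` becomes a map keyed by quantization scheme, and a sign flag selects which entry applies:

```python
UINT8, INT8 = "UINT8", "INT8"


class OpQuantSpec:
    """Toy op quantization spec; only the map structure is the point."""

    def __init__(self):
        # scheme -> list of (scale, zero_point) output restrictions,
        # so one op spec can carry both UINT8 and INT8 rules.
        self.restricted_output_params = {UINT8: [], INT8: []}


def output_restrictions(spec, signed):
    # A "quantize_sign"-style flag picks the applicable entry.
    return spec.restricted_output_params[INT8 if signed else UINT8]


spec = OpQuantSpec()
spec.restricted_output_params[UINT8].append((1.0 / 256.0, 0))
spec.restricted_output_params[INT8].append((1.0 / 128.0, 0))
```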
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258393073
-
Committed by TensorFlower Gardener
PiperOrigin-RevId: 258392805
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258391933
-
Committed by Alexandre Passos
PiperOrigin-RevId: 258389587
-
Committed by Benoit Jacob
50001. PiperOrigin-RevId: 258389320
-
Committed by Jiri Simsa
PiperOrigin-RevId: 258385606
-
Committed by A. Unique TensorFlower
PiperOrigin-RevId: 258380926
-