- 13 Jan, 2017 — 40 commits
-
-
Committed by A. Unique TensorFlower
Change: 144395110
-
Committed by Jonathan Hseu
Change: 144394762
-
Committed by Brennan Saeta
This commit creates a benchmark for the resize_bicubic op on the example sizes used in the inception_train script. The current performance on my desktop is: - bicubic_749_603: 4.98ms/image - bicubic_141_186: 4.79ms/image - bicubic_183_229: 4.80ms/image Change: 144392656
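The ms/image figures above come from timing repeated invocations of the op. A minimal, framework-agnostic sketch of such a harness (a hedged illustration: `resize_fn` and `benchmark_resize` are hypothetical stand-ins, not the benchmark code this commit actually adds):

```python
import time

def benchmark_resize(resize_fn, image, iters=100):
    # Warm up once so one-time setup cost is excluded from the timing.
    resize_fn(image)
    # Time `iters` calls and report the mean wall-clock ms per image.
    start = time.time()
    for _ in range(iters):
        resize_fn(image)
    return (time.time() - start) / iters * 1000.0
```

In the real benchmark the timed callable would run the resize_bicubic op at each of the sizes listed above (749x603, 141x186, 183x229).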
-
Committed by A. Unique TensorFlower
Change: 144392019
-
Committed by Kiril Gorovoy
Temporary fix of bazel dependencies when including TensorFlow as a submodule. Undo change when this use-case is supported in native.http_archive in Bazel. Change: 144390772
-
Committed by Eugene Brevdo
Useful for simplifying a number of flatten -> map -> recreate structure calls. Change: 144388829
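The flatten → map → recreate-structure pattern this refers to can be sketched in plain Python (hypothetical helpers for illustration, not TensorFlow's actual nest utilities):

```python
def flatten(structure):
    # Depth-first flatten of nested lists/tuples into a flat list of leaves.
    if isinstance(structure, (list, tuple)):
        out = []
        for item in structure:
            out.extend(flatten(item))
        return out
    return [structure]

def map_structure(fn, structure):
    # Apply fn to every leaf while preserving the original nesting,
    # replacing a manual flatten -> map -> rebuild sequence with one call.
    if isinstance(structure, (list, tuple)):
        return type(structure)(map_structure(fn, item) for item in structure)
    return fn(structure)
```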
-
Committed by A. Unique TensorFlower
Change: 144387537
-
Committed by Andrew Selle
Change: 144384783
-
Committed by Gunhan Gulsoy
Change: 144384102
-
Committed by Peter Hawkins
Change: 144384086
-
Committed by Peter Hawkins
Add support for marking XLA computations as stateful. Add a store for xla::ChannelHandles in XlaCompiler. Don't mark _Send/_Recv for XLA computation. Change: 144382814
-
Committed by Andrew Selle
- Handle more functions: tf.svd, tf.batch_matmul, tf.nn.softmax_cross_entropy_with_logits, tf.nn.sparse_softmax_cross_entropy_with_logits, tf.nn.sigmoid_cross_entropy_with_logits.
- Handle in-place file modification correctly (and add test).
- Handle raw attribute lookups, i.e. lists of functions: `foo = [tf.mul]` can be upgraded to `foo = [tf.multiply]`.
Change: 144381716
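The raw-attribute rename described above can be sketched as a simple textual pass (a hedged sketch only; the real upgrade script is far more thorough and handles argument reordering and more renames):

```python
import re

# Old-name -> new-name pairs; tf.mul -> tf.multiply is taken from the
# commit message, tf.batch_matmul was folded into tf.matmul in TF 1.0.
RENAMES = {"tf.mul": "tf.multiply", "tf.batch_matmul": "tf.matmul"}

def upgrade_source(text):
    # Replace each old API name with its renamed counterpart. The word
    # boundaries keep already-renamed calls (e.g. tf.multiply) untouched.
    for old, new in RENAMES.items():
        text = re.sub(r"\b" + re.escape(old) + r"\b", new, text)
    return text
```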
-
Committed by Peter Hawkins
[TF:XLA] Lower severity of warning if an XLA_GPU device cannot be created to reduce log spam in opensource tests. Change: 144381041
-
Committed by A. Unique TensorFlower
Change: 144378661
-
Committed by A. Unique TensorFlower
Change: 144376783
-
Committed by A. Unique TensorFlower
Change: 144375998
-
Committed by Martin Wicke
-
Committed by David Majnemer
Change: 144371549
-
Committed by Jonathan Hseu
Reformatted core/platform/posix/port.cc as a test. Change: 144368934
-
Committed by A. Unique TensorFlower
Change: 144366312
-
Committed by Peter Hawkins
Change: 144362931
-
Committed by A. Unique TensorFlower
Change: 144360309
-
Committed by A. Unique TensorFlower
Change: 144356967
-
Committed by A. Unique TensorFlower
Change: 144355590
-
Committed by Gunhan Gulsoy
Change: 144354566
-
Committed by A. Unique TensorFlower
Change: 144354547
-
Committed by A. Unique TensorFlower
Change: 144354294
-
Committed by A. Unique TensorFlower
[XLA] Recognize any reduction where dimensions to keep are consecutive in memory as effective reduction to vector Change: 144353864
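The observation above can be illustrated with NumPy (an illustration of the memory-layout argument, not XLA's implementation): when the kept dimensions of a row-major array are consecutive in memory, the reduction is equivalent to collapsing the array to 2-D and reducing one axis.

```python
import numpy as np

# Row-major 4-D array; keep axes (0, 1), reduce axes (2, 3).
x = np.arange(2 * 3 * 4 * 5, dtype=np.float64).reshape(2, 3, 4, 5)

full = x.sum(axis=(2, 3))
# Because the kept axes are consecutive in memory, the same result is a
# plain 2-D row reduction ("effective reduction to vector"):
as_2d = x.reshape(2 * 3, 4 * 5).sum(axis=1).reshape(2, 3)

assert np.array_equal(full, as_2d)
```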
-
Committed by Mustafa Ispir
User code will look as follows:

    opt = tf.SyncReplicasOptimizer(...)
    train_op = opt.minimize(total_loss, global_step=global_step)
    sync_rep_hook = opt.make_session_run_hook(is_chief)
    with training.MonitoredTrainingSession(master=master, is_chief=is_chief,
                                           hooks=[sync_rep_hook]) as mon_sess:
      while not mon_sess.should_stop():
        mon_sess.run(training_op)

Change: 144353039
-
Committed by A. Unique TensorFlower
Change: 144352405
-
Committed by David G. Andersen
Change: 144351555
-
Committed by Asim Shankar
Change the member variable to be of type Classifier interface instead of the implementation (TensorFlowImageClassifier) in the activity class. My intention is to create new Classifier implementations that use the TensorFlow Java API (org.tensorflow.Graph, Session etc.) instead of the Android contrib API (org.tensorflow.contrib.android.TensorFlowInferenceInterface). This re-organization will make the switch between Classifier implementations easier during testing. While at it, some minor cleanups: - Get rid of unnecessary "throws IOException" - Use a factory function instead of an initializer function. Change: 144348772
-
Committed by David G. Andersen
Change: 144344623
-
Committed by A. Unique TensorFlower
Fix bug in shape inference for set operations. It errantly had been setting first dim of output indices and values to first dim of inputs. Add logic to infer the output rank of DenseToSparse ops from the 2nd (sparse) input arg, even if the 1st (dense) arg's rank is unknown. Replace explicit rank checks with InferenceContext.WithRank* and InferenceContext.WithValue. Change: 144344168
-
Committed by Patrick Nguyen
This also exposes meta_graph_pb2.TensorInfo as tf.TensorInfo. Change: 144344131
-
Committed by Sergio Guadarrama
Change: 144342278
-
Committed by A. Unique TensorFlower
Change: 144341695
-
Committed by Shanqing Cai
The wrapper generates debug-dump directories inside a session_root directory, with each directory corresponding to a Session.run() call. In addition to the usual GraphDef file and dumped Tensor Event files, each run directory contains special files: "_tfdbg_run_fetches_info" and "_tfdbg_run_feed_keys_info", containing a str representation of the fetches and feed_dict.keys() used during the Session.run() call, respectively. The debug-dump directories have the naming pattern of "run_<epoch_timestamp>_<uuid>". Change: 144340734
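The run_&lt;epoch_timestamp&gt;_&lt;uuid&gt; naming pattern described above can be sketched as follows (a hypothetical helper for illustration; `make_run_dir_name` is not part of the tfdbg API):

```python
import time
import uuid

def make_run_dir_name():
    # Build a dump-directory name following the pattern described above:
    # run_<epoch_timestamp>_<uuid>. The UUID keeps names unique even when
    # two Session.run() calls land on the same epoch second.
    return "run_%d_%s" % (int(time.time()), uuid.uuid4().hex)
```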
-
Committed by Shanqing Cai
Change: 144340064
-
Committed by Joshua V. Dillon
transformation of iid Student's t-distributions and should not be confused with the Multivariate Student's t-distribution (which is an example of an https://en.wikipedia.org/wiki/Elliptical_distribution). Change: 144338393
-