Non-training batch norm operator has poor performance because it runs into TensorFlow's non-fused batch norm API (#10207)

* When using TensorFlow as the backend, route batch normalization to fused batch norm whenever possible, which has better performance. Fixes issue: http://github.com/keras-team/keras/issues/10058
* In the TensorFlow backend, have batch norm call FusedBatchNorm only for NHWC format, and only when gamma and beta are not None.
  Test result:
  Test env: TensorFlow (commit a543d9471047ca3f6881c87105fcbe2cdff9207d, Date: Thu May 10 17:43:30 2018, local build), Python 3.4, CentOS 7.4.
  Test cases:
  - "pytest ./tests/keras/layers/normalization_test.py": all passed
  - "pytest ./tests": same results as without this commit's modification to BN
* Fix code style.
* 1. Add an axis parameter to the backend's batch_normalization functions.
  2. Refine the batch_normalization function in the TensorFlow backend so it calls fused batch norm whenever possible. Thanks for the comments from fchollet.
* Trigger
* 1. Add a default value of -1 for the axis parameter of batch_normalization in the backend.
  2. Fix some code style issues. Thanks for the comments from fchollet.
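The dispatch rule described above (fused kernel only for 4D NHWC inputs with both gamma and beta present) can be sketched as plain Python. This is a hypothetical illustration of the condition, not the actual Keras backend code; the function names `can_use_fused_batch_norm` and `batch_normalization_reference` are invented for this sketch, and the reference formula is the standard inference-mode batch norm computation, not TensorFlow's fused kernel.

```python
import numpy as np


def can_use_fused_batch_norm(ndim, axis, gamma, beta):
    """Hypothetical sketch of the fused-path condition from the commit:
    fused batch norm applies only to 4D inputs in NHWC layout (channel
    axis last) and only when both gamma and beta are provided."""
    if ndim != 4:
        return False
    if axis not in (-1, 3):  # NHWC: channels must be on the last axis
        return False
    return gamma is not None and beta is not None


def batch_normalization_reference(x, mean, var, beta, gamma, epsilon=1e-3):
    """Standard inference-mode batch norm formula that both the fused
    and non-fused paths compute: gamma * (x - mean) / sqrt(var + eps) + beta."""
    return gamma * (x - mean) / np.sqrt(var + epsilon) + beta


# Usage: the fused path is taken for NHWC 4D inputs with affine parameters...
print(can_use_fused_batch_norm(ndim=4, axis=-1, gamma=1.0, beta=0.0))  # True
# ...but not for NCHW (axis=1), non-4D inputs, or missing gamma/beta.
print(can_use_fused_batch_norm(ndim=4, axis=1, gamma=1.0, beta=0.0))   # False
print(can_use_fused_batch_norm(ndim=4, axis=-1, gamma=None, beta=0.0)) # False
```

Either code path should produce the same normalized output; the fused kernel only changes how the computation is executed, not its result.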