Commit d66aeda6 authored by Kanglan Tang, committed by TensorFlower Gardener

Update release Linux configs in the root bazelrc

The following changes are included:
- Starting from TF 2.13, TensorFlow uses Clang as the compiler for Linux builds. Accordingly, we update the toolchain in the release_cpu_linux and release_gpu_linux configs.
- Preserve the old Linux build options in the unsupported_cpu_linux and unsupported_gpu_linux configs. If your project fails to build with Clang, you can substitute these unsupported configs for the release configs in your build command. Note, however, that the old toolchain is no longer officially supported by TensorFlow, and the unsupported configs will be removed soon. We strongly recommend migrating to Clang as your compiler for TensorFlow Linux builds. Instructions are available in the official documentation: https://www.tensorflow.org/install/source#install_clang_recommended_linux_only. Another good alternative is to use our Docker containers to build and test TensorFlow: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/tf_sig_build_dockerfiles.
- Add official linker options and container environment settings to the release Linux configs.
- Deduplicate build options in cpu.bazelrc and gpu.bazelrc.
- Delete outdated CI jobs.
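The fallback described above amounts to swapping the config name in the Bazel invocation. A minimal sketch (the config names are the ones defined in this change; the pip-package target is shown for illustration):

```shell
# Recommended: Clang-based release toolchain
bazel build --config=release_cpu_linux //tensorflow/tools/pip_package:build_pip_package

# Temporary fallback: old gcc toolchain (unsupported, slated for removal)
bazel build --config=unsupported_cpu_linux //tensorflow/tools/pip_package:build_pip_package
```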

PiperOrigin-RevId: 564620926
Parent 0af4b75f
@@ -66,12 +66,10 @@
 #
 # Release build options (for all operating systems)
 # release_base: Common options for all builds on all operating systems.
-# release_gpu_base: Common options for GPU builds on Linux and Windows.
 # release_cpu_linux: Toolchain and CUDA options for Linux CPU builds.
-# release_cpu_macos: Toolchain and CUDA options for MacOS CPU builds.
 # release_gpu_linux: Toolchain and CUDA options for Linux GPU builds.
+# release_cpu_macos: Toolchain and CUDA options for MacOS CPU builds.
 # release_cpu_windows: Toolchain and CUDA options for Windows CPU builds.
-# release_gpu_windows: Toolchain and CUDA options for Windows GPU builds.
 # Default build options. These are applied first and unconditionally.
@@ -238,11 +236,11 @@
 build:cuda_clang --@local_config_cuda//:cuda_compiler=clang
 # Select supported compute capabilities (supported graphics cards).
 # This is the same as the official TensorFlow builds.
 # See https://developer.nvidia.com/cuda-gpus#compute
-# TODO(angerson, perfinion): What does sm_ vs compute_ mean?
-# TODO(angerson, perfinion): How can users select a good value for this?
+# TODO(angerson, perfinion): What does sm_ vs compute_ mean? How can users
+# select a good value for this? See go/tf-pip-cuda
 build:cuda_clang --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
-# Set up compilation CUDA version and paths and use CI toolchain.
+# Set up compilation CUDA version and paths and use the CUDA Clang toolchain.
 build:cuda_clang_official --config=cuda_clang
 build:cuda_clang_official --action_env=TF_CUDA_VERSION="11"
 build:cuda_clang_official --action_env=TF_CUDNN_VERSION="8"
@@ -455,7 +453,7 @@
 build:rbe_linux --linkopt=-lm
 build:rbe_linux --host_linkopt=-lm
 build:rbe_linux_cpu --config=rbe_linux
-# Linux cpu and cuda builds now share the same toolchain.
+# Linux cpu and cuda builds share the same toolchain now.
 build:rbe_linux_cpu --host_crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
 build:rbe_linux_cpu --crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
 build:rbe_linux_cpu --extra_toolchains="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain-linux-x86_64"
@@ -544,34 +542,82 @@
 try-import %workspace%/.tf_configure.bazelrc
 try-import %workspace%/.bazelrc.user
 # Here are bazelrc configs for release builds
-build:release_base --config=v2
+# Build TensorFlow v2.
 test:release_base --test_size_filters=small,medium
-build:release_cpu_linux --config=release_base
+# Target the AVX instruction set
 build:release_cpu_linux --config=avx_linux
-build:release_cpu_linux --crosstool_top="@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
+# Use the Clang toolchain to compile
+build:release_cpu_linux --crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
+# Disable clang extention that rejects type definitions within offsetof.
+# This was added in clang-16 by https://reviews.llvm.org/D133574.
+# Can be removed once upb is updated, since a type definition is used within
+# offset of in the current version of ubp.
+# See https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183.
+build:release_cpu_linux --copt=-Wno-gnu-offsetof-extensions
+# Set lld as the linker.
+build:release_cpu_linux --linkopt="-fuse-ld=lld"
+build:release_cpu_linux --linkopt="-lm"
+# Container environment settings below this point.
+# Use Python 3.X as installed in container image
+build:release_cpu_linux --action_env PYTHON_BIN_PATH="/usr/bin/python3"
+build:release_cpu_linux --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
+build:release_cpu_linux --python_path="/usr/bin/python3"
+# Set Clang as compiler. Use the actual path to clang installed in container.
+build:release_cpu_linux --repo_env=CC="/usr/lib/llvm-16/bin/clang"
+build:release_cpu_linux --repo_env=BAZEL_COMPILER="/usr/lib/llvm-16/bin/clang"
+# Store performance profiling log in the mounted artifact directory.
+# The profile can be viewed by visiting chrome://tracing in a Chrome browser.
+# See https://docs.bazel.build/versions/main/skylark/performance.html#performance-profiling
+build:release_cpu_linux --profile=/tf/pkg/profile.json.gz
+# Test-related settings below this point.
+test:release_cpu_linux --build_tests_only --keep_going --test_output=errors --verbose_failures=true
+test:release_cpu_linux --local_test_jobs=HOST_CPUS
 test:release_cpu_linux --test_env=LD_LIBRARY_PATH
+# Give only the list of failed tests at the end of the log
+test:release_cpu_linux --test_summary=short
-build:release_cpu_macos --config=release_base
-build:release_cpu_macos --config=avx_linux
-build:release_gpu_base --config=cuda
-build:release_gpu_base --action_env=TF_CUDA_VERSION="11"
-build:release_gpu_base --action_env=TF_CUDNN_VERSION="8"
-build:release_gpu_base --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
 build:release_gpu_linux --config=release_cpu_linux
-build:release_gpu_linux --config=release_gpu_base
-build:release_gpu_linux --config=tensorrt
-build:release_gpu_linux --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
-build:release_gpu_linux --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-11.1/lib64:/usr/local/tensorrt/lib"
-build:release_gpu_linux --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
-build:release_gpu_linux --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
+# Set up compilation CUDA version and paths and use the CUDA Clang toolchain.
+# Note that linux cpu and cuda builds share the same toolchain now.
+build:release_gpu_linux --config=cuda_clang_official
+test:release_gpu_linux --test_env=LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
+# Local test jobs has to be 4 because parallel_gpu_execute is fragile, I think
+test:release_gpu_linux --test_timeout=300,450,1200,3600 --local_test_jobs=4 --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute
+# The old gcc linux build options are preserved in the unsupported_*_linux
+# configs. If your project fails to build with Clang, you can use these
+# unsupported flags to replace the release flags in your build command.
+# However, please note that the old toolchain is no longer officially supported
+# by TensorFlow and the unsupported configs will be removed soon b/299962977. We
+# strongly recommend that you migrate to Clang as your compiler for TensorFlow
+# Linux builds. Instructions are available in the official documentation:
+# https://www.tensorflow.org/install/source#install_clang_recommended_linux_only
+# Another good option is to use our Docker containers to build and test TF:
+# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/tf_sig_build_dockerfiles.
+build:unsupported_cpu_linux --config=avx_linux
+build:unsupported_cpu_linux --crosstool_top="@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
+test:unsupported_cpu_linux --test_env=LD_LIBRARY_PATH
+test:unsupported_cpu_linux --config=release_base
+build:unsupported_gpu_linux --config=cuda
+build:unsupported_gpu_linux --config=unsupported_cpu_linux
+build:unsupported_gpu_linux --action_env=TF_CUDA_VERSION="11"
+build:unsupported_gpu_linux --action_env=TF_CUDNN_VERSION="8"
+build:unsupported_gpu_linux --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
+build:unsupported_gpu_linux --config=tensorrt
+build:unsupported_gpu_linux --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
+build:unsupported_gpu_linux --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-11.1/lib64:/usr/local/tensorrt/lib"
+build:unsupported_gpu_linux --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
+build:unsupported_gpu_linux --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
+build:release_cpu_macos --config=avx_linux
+test:release_cpu_macos --config=release_base
 # TODO(kanglan): Update windows configs after b/289091160 is fixed
-build:release_cpu_windows --config=release_base
 build:release_cpu_windows --config=avx_win
 build:release_cpu_windows --define=no_tensorflow_py_deps=true
+test:release_cpu_windows --config=release_base
 # Exclude TFRT integration for anything but Linux.
 build:android --config=no_tfrt
@@ -135,7 +135,6 @@
 tf_staging/tensorflow/tools/ci_build/release/common_win.bat:
 tf_staging/tensorflow/tools/ci_build/release/mac_build_utils.sh:
 tf_staging/tensorflow/tools/ci_build/release/ubuntu_16/custom_op/nightly.sh:
 tf_staging/tensorflow/tools/ci_build/release/ubuntu_16/custom_op/release.sh:
-tf_staging/tensorflow/tools/ci_build/release/ubuntu_16/tpu_py37_full/nonpip.sh:
 tf_staging/tensorflow/tools/ci_build/release/windows/cpu_py310_full/release_pip_rename.sh:
 tf_staging/tensorflow/tools/ci_build/release/windows/cpu_py37_full/release_pip_rename.sh:
 tf_staging/tensorflow/tools/ci_build/release/windows/cpu_py38_full/release_pip_rename.sh:
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.10
export PYTHON_BIN_PATH=$(which python)
"$PYTHON_BIN_PATH" tensorflow/tools/ci_build/update_version.py --nightly
# Build the pip package
bazel build \
--config=release_cpu_linux \
--action_env=PYTHON_BIN_PATH="$PYTHON_BIN_PATH" \
tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --cpu --nightly_flag
# Upload wheel files
upload_wheel_cpu_ubuntu
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.9
export PYTHON_BIN_PATH=$(which python)
"$PYTHON_BIN_PATH" tensorflow/tools/ci_build/update_version.py --nightly
# Build the pip package
bazel build \
--config=release_cpu_linux \
--action_env=PYTHON_BIN_PATH="$PYTHON_BIN_PATH" \
tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --cpu --nightly_flag
# Upload wheel files
upload_wheel_cpu_ubuntu
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.10
export PYTHON_BIN_PATH=$(which python)
"$PYTHON_BIN_PATH" tensorflow/tools/ci_build/update_version.py --nightly
# Build the pip package
bazel build \
--config=release_gpu_linux \
--action_env=PYTHON_BIN_PATH="$PYTHON_BIN_PATH" \
tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --nightly_flag
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --gpu --nightly_flag
# Upload wheel files
upload_wheel_gpu_ubuntu
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.9
export PYTHON_BIN_PATH=$(which python)
"$PYTHON_BIN_PATH" tensorflow/tools/ci_build/update_version.py --nightly
# Build the pip package
bazel build \
--config=release_gpu_linux \
--action_env=PYTHON_BIN_PATH="$PYTHON_BIN_PATH" \
tensorflow/tools/pip_package:build_pip_package
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --nightly_flag
./bazel-bin/tensorflow/tools/pip_package/build_pip_package pip_pkg --gpu --nightly_flag
# Upload wheel files
upload_wheel_gpu_ubuntu
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
# Update bazel
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.10
tag_filters="-no_oss,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py310,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Run tests
set +e
bazel test \
--config=release_cpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python)" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" \
--test_lang_filters=py \
--test_output=errors \
--local_test_jobs=8 \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.10'
# Setup virtual environment and install dependencies
setup_venv_ubuntu ${TF_PYTHON_VERSION}
export PYTHON_BIN_PATH="$(which ${TF_PYTHON_VERSION})"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=release_cpu_linux"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_excluded,-oss_serial,-no_oss_py310,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
# Update bazel
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.9
tag_filters="-no_oss,-oss_excluded,-oss_serial,-gpu,-tpu,-benchmark-test,-no_oss_py39,-v1only"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Run tests
set +e
bazel test \
--config=release_cpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python)" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" \
--test_lang_filters=py \
--test_output=errors \
--local_test_jobs=8 \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
# Update bazel
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="CPU"
export TF_PYTHON_VERSION='python3.9'
export PYTHON_BIN_PATH="$(which ${TF_PYTHON_VERSION})"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Export optional variables for running pip.sh
export TF_BUILD_FLAGS="--config=release_cpu_linux"
export TF_TEST_FLAGS="--define=no_tensorflow_py_deps=true --test_lang_filters=py --test_output=errors --verbose_failures=true --keep_going --test_env=TF2_BEHAVIOR=1"
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
export TF_TEST_FILTER_TAGS='-no_oss,-oss_excluded,-oss_serial,-no_oss_py39,-v1only'
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow_cpu"
export TF_PIP_TEST_ROOT="pip_test"
./tensorflow/tools/ci_build/builds/pip_new.sh
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.10
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-no_oss_py310,-no_cuda11"
set +e
bazel test \
--config=release_gpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python)" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" \
--test_lang_filters=py \
--test_output=errors --verbose_failures=true --keep_going \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.10'
# Setup virtual environment and install dependencies
setup_venv_ubuntu ${TF_PYTHON_VERSION}
export PYTHON_BIN_PATH="$(which ${TF_PYTHON_VERSION})"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-no_oss_py310,-no_cuda11'
export TF_BUILD_FLAGS="--config=release_gpu_linux "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--action_env=TF_CUDA_VERSION=11.8 --action_env=TF_CUDNN_VERSION=8.6 --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow" # single pip package!
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.8
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-no_oss_py38,-no_cuda11"
set +e
bazel test \
--config=release_gpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python)" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" \
--test_lang_filters=py \
--test_output=errors --verbose_failures=true --keep_going \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Setup virtual environment and install dependencies
setup_venv_ubuntu python3.9
export LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/tensorrt/lib"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
tag_filters="gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-no_oss_py39,-no_cuda11"
set +e
bazel test \
--config=release_gpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python)" \
--build_tag_filters="${tag_filters}" \
--test_tag_filters="${tag_filters}" \
--test_lang_filters=py \
--test_output=errors --verbose_failures=true --keep_going \
--test_timeout="300,450,1200,3600" --local_test_jobs=4 \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute \
-- ${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/...
test_xml_summary_exit
# Remove and cleanup virtual environment
remove_venv_ubuntu
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
install_bazelisk
# Export required variables for running pip.sh
export OS_TYPE="UBUNTU"
export CONTAINER_TYPE="GPU"
export TF_PYTHON_VERSION='python3.9'
export PYTHON_BIN_PATH="$(which ${TF_PYTHON_VERSION})"
# Get the default test targets for bazel.
source tensorflow/tools/ci_build/build_scripts/DEFAULT_TEST_TARGETS.sh
# Export optional variables for running pip.sh
export TF_TEST_FILTER_TAGS='gpu,requires-gpu,-no_gpu,-no_oss,-oss_excluded,-oss_serial,-no_oss_py39,-no_cuda11'
export TF_BUILD_FLAGS="--config=release_gpu_linux "
export TF_TEST_FLAGS="--test_tag_filters=${TF_TEST_FILTER_TAGS} --build_tag_filters=${TF_TEST_FILTER_TAGS} \
--action_env=TF_CUDA_VERSION=11.8 --action_env=TF_CUDNN_VERSION=8.6 --test_env=TF2_BEHAVIOR=1 \
--config=cuda --test_output=errors --local_test_jobs=4 --test_lang_filters=py \
--verbose_failures=true --keep_going --define=no_tensorflow_py_deps=true \
--run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute "
export TF_TEST_TARGETS="${DEFAULT_BAZEL_TARGETS} -//tensorflow/lite/... "
export TF_PIP_TESTS="test_pip_virtualenv_non_clean test_pip_virtualenv_clean"
#export IS_NIGHTLY=0 # Not nightly; uncomment if building from tf repo.
export TF_PROJECT_NAME="tensorflow" # single pip package!
export TF_PIP_TEST_ROOT="pip_test"
# To build both tensorflow and tensorflow-gpu pip packages
export TF_BUILD_BOTH_GPU_PACKAGES=1
./tensorflow/tools/ci_build/builds/pip_new.sh
#!/bin/bash
# Copyright 2019 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -e
set -x
source tensorflow/tools/ci_build/release/common.sh
# For some legacy reasons, it is hard to rename this test to
# `tpu_py3_10_full`, even though we are using Python 3.10.
install_ubuntu_16_python_pip_deps python3.10
pip3.10 install --user --upgrade --ignore-installed cloud-tpu-client
install_bazelisk
test_patterns=(//tensorflow/... -//tensorflow/compiler/... -//tensorflow/lite/...)
tag_filters="tpu,-tpu_pod,-no_tpu,-notpu,-no_oss,-oss_excluded,-no_oss_py37"
bazel_args=(
--config=release_cpu_linux \
--repo_env=PYTHON_BIN_PATH="$(which python3.10)" \
--build_tag_filters="${tag_filters}" \
--test_sharding_strategy=disabled \
--test_tag_filters="${tag_filters}" \
--test_output=errors --verbose_failures=true --keep_going \
--build_tests_only
)
bazel build "${bazel_args[@]}" -- "${test_patterns[@]}"
TPU_NAME="kokoro-tpu-2vm-${RANDOM}"
TPU_PROJECT="tensorflow-testing-tpu"
TPU_ZONES="us-central1-b:v2-8 us-central1-c:v2-8 us-central1-b:v3-8 us-central1-a:v3-8"
RETRY_COUNTER=0
RETRY_LIMIT=60 # minutes
while [[ ! "${TPU_CREATED}" == "true" ]]; do
for TPU_ZONE_WITH_TYPE in $TPU_ZONES; do
TPU_ZONE="$(echo "${TPU_ZONE_WITH_TYPE}" | cut -d : -f 1)"
TPU_TYPE="$(echo "${TPU_ZONE_WITH_TYPE}" | cut -d : -f 2)"
if gcloud compute tpus create "$TPU_NAME" \
--zone="${TPU_ZONE}" \
--accelerator-type="${TPU_TYPE}" \
--version=nightly; then
TPU_CREATED="true"
break
fi
done
# retry for $RETRY_LIMIT minutes if resources are not available
if [[ ! "${TPU_CREATED}" == "true" ]]; then
RETRY_COUNTER="$(( RETRY_COUNTER+1 ))"
if (( RETRY_COUNTER == RETRY_LIMIT )); then
echo "Retry limit exceeded. Retry count: $RETRY_COUNTER"
exit 1
fi
echo "Retry number $RETRY_COUNTER"
sleep 1m
fi
done
echo "Total retry time: $RETRY_COUNTER minutes."
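The zone-rotation retry loop above can be exercised outside of CI. Below is a hypothetical, simplified re-run of the same control flow with the `gcloud compute tpus create` call replaced by a stub (`fake_create_tpu`, an assumption for illustration) so it runs anywhere; the zone list mirrors the real script, and the retry limit is shortened:

```shell
#!/bin/bash
# Hypothetical sketch of the retry loop above, with the gcloud call stubbed
# out so the control flow can be exercised anywhere.
TPU_ZONES="us-central1-b:v2-8 us-central1-c:v2-8 us-central1-b:v3-8 us-central1-a:v3-8"
RETRY_LIMIT=3   # minutes in the real script; bare rounds here
ATTEMPTS=0

fake_create_tpu() {  # stand-in for `gcloud compute tpus create`
  ATTEMPTS=$(( ATTEMPTS + 1 ))
  # Pretend capacity shows up on the 6th creation attempt overall.
  [ "${ATTEMPTS}" -ge 6 ]
}

RETRY_COUNTER=0
TPU_CREATED=""
while [ "${TPU_CREATED}" != "true" ]; do
  for TPU_ZONE_WITH_TYPE in ${TPU_ZONES}; do
    TPU_ZONE="$(echo "${TPU_ZONE_WITH_TYPE}" | cut -d : -f 1)"
    TPU_TYPE="$(echo "${TPU_ZONE_WITH_TYPE}" | cut -d : -f 2)"
    if fake_create_tpu "${TPU_ZONE}" "${TPU_TYPE}"; then
      TPU_CREATED="true"
      break
    fi
  done
  # Retry for up to RETRY_LIMIT rounds if no zone had capacity.
  if [ "${TPU_CREATED}" != "true" ]; then
    RETRY_COUNTER=$(( RETRY_COUNTER + 1 ))
    if [ "${RETRY_COUNTER}" -eq "${RETRY_LIMIT}" ]; then
      echo "Retry limit exceeded. Retry count: ${RETRY_COUNTER}"
      exit 1
    fi
    # The real script sleeps 1m between rounds.
  fi
done
echo "Created ${TPU_TYPE} in ${TPU_ZONE} after ${RETRY_COUNTER} retry round(s)"
```

With this stub, rounds sweep all four zone/type pairs in order, so the 6th attempt lands in `us-central1-c` on the second round.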
# Clean up script uses these files.
echo "${TPU_NAME}" > "${KOKORO_ARTIFACTS_DIR}/tpu_name"
echo "${TPU_ZONE}" > "${KOKORO_ARTIFACTS_DIR}/tpu_zone"
echo "${TPU_PROJECT}" > "${KOKORO_ARTIFACTS_DIR}/tpu_project"
test_args=(
--test_timeout=120,600,-1,-1 \
--test_arg=--tpu="${TPU_NAME}" \
--test_arg=--zone="${TPU_ZONE}" \
--test_arg=--test_dir_base=gs://kokoro-tpu-testing/tempdir/ \
--local_test_jobs=1
)
set +e
bazel test "${bazel_args[@]}" "${test_args[@]}" -- "${test_patterns[@]}"
test_xml_summary_exit
...
@@ -12,42 +12,8 @@ build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/
 # the CACHEBUSTER to the PR number.
 build --action_env=CACHEBUSTER=501872366
-# Use Python 3.X as installed in container image
-build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
-build --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
-build --python_path="/usr/bin/python3"
-# Build TensorFlow v2
-build --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
-# Target the AVX instruction set
-build --copt=-mavx --host_copt=-mavx
-# Use lld as the linker
-build --linkopt="-fuse-ld=lld"
-build --linkopt="-lm"
-# Disable clang extension that rejects type definitions within offsetof.
-# This was added in clang-16 by https://reviews.llvm.org/D133574.
-# Can be removed once upb is updated, since a type definition is used within
-# offsetof in the current version of upb.
-# See https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183.
-build --copt=-Wno-gnu-offsetof-extensions
-# Store performance profiling log in the mounted artifact directory.
-# The profile can be viewed by visiting chrome://tracing in a Chrome browser.
-# See https://docs.bazel.build/versions/main/skylark/performance.html#performance-profiling
-build --profile=/tf/pkg/profile.json.gz
-# Use the Clang toolchain to compile for manylinux2014
-build --crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
-# Test-related settings below this point.
-test --build_tests_only --keep_going --test_output=errors --verbose_failures=true
-test --local_test_jobs=HOST_CPUS
-test --test_env=LD_LIBRARY_PATH
-# Give only the list of failed tests at the end of the log
-test --test_summary=short
+# Build options for CPU Linux
+build --config=release_cpu_linux
 # "nonpip" tests are regular py_test tests.
 # Pass --config=nonpip to run the same suite of tests. If you want to run just
...
...
@@ -12,43 +12,8 @@ build:sigbuild_remote_cache_push --remote_cache="https://storage.googleapis.com/
 # the CACHEBUSTER to the PR number.
 build --action_env=CACHEBUSTER=501872366
-# Use Python 3.X as installed in container image
-build --action_env PYTHON_BIN_PATH="/usr/bin/python3"
-build --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
-build --python_path="/usr/bin/python3"
-# Build TensorFlow v2
-build --define=tf_api_version=2 --action_env=TF2_BEHAVIOR=1
-# Target the AVX instruction set
-build --copt=-mavx --host_copt=-mavx
-# Disable clang extension that rejects type definitions within offsetof.
-# This was added in clang-16 by https://reviews.llvm.org/D133574.
-# Can be removed once upb is updated, since a type definition is used within
-# offsetof in the current version of upb.
-# See https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183.
-build --copt=-Wno-gnu-offsetof-extensions
-# Use lld as the linker
-build --linkopt="-fuse-ld=lld"
-build --linkopt="-lm"
-# Store performance profiling log in the mounted artifact directory.
-# The profile can be viewed by visiting chrome://tracing in a Chrome browser.
-# See https://docs.bazel.build/versions/main/skylark/performance.html#performance-profiling
-build --profile=/tf/pkg/profile.json.gz
-# CUDA: Set up compilation CUDA version and paths
-build --config=cuda_clang_official
-# Test-related settings below this point.
-test --build_tests_only --keep_going --test_output=errors --verbose_failures=true
-test --test_env=LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
-# Local test jobs have to be 4 because parallel_gpu_execute is fragile, I think
-test --test_timeout=300,450,1200,3600 --local_test_jobs=4 --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute
-# Give only the list of failed tests at the end of the log
-test --test_summary=short
+# Build options for GPU Linux
+build --config=release_gpu_linux
 # "nonpip" tests are regular py_test tests.
 # Pass --config=nonpip to run the same suite of tests. If you want to run just
...
...
@@ -66,12 +66,10 @@
 #
 # Release build options (for all operating systems)
 # release_base: Common options for all builds on all operating systems.
-# release_gpu_base: Common options for GPU builds on Linux and Windows.
 # release_cpu_linux: Toolchain and CUDA options for Linux CPU builds.
-# release_cpu_macos: Toolchain and CUDA options for MacOS CPU builds.
 # release_gpu_linux: Toolchain and CUDA options for Linux GPU builds.
+# release_cpu_macos: Toolchain and CUDA options for MacOS CPU builds.
 # release_cpu_windows: Toolchain and CUDA options for Windows CPU builds.
-# release_gpu_windows: Toolchain and CUDA options for Windows GPU builds.
 # Default build options. These are applied first and unconditionally.
@@ -238,11 +236,11 @@ build:cuda_clang --@local_config_cuda//:cuda_compiler=clang
 # Select supported compute capabilities (supported graphics cards).
 # This is the same as the official TensorFlow builds.
 # See https://developer.nvidia.com/cuda-gpus#compute
-# TODO(angerson, perfinion): What does sm_ vs compute_ mean?
-# TODO(angerson, perfinion): How can users select a good value for this?
+# TODO(angerson, perfinion): What does sm_ vs compute_ mean? How can users
+# select a good value for this? See go/tf-pip-cuda
 build:cuda_clang --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
-# Set up compilation CUDA version and paths and use CI toolchain.
+# Set up compilation CUDA version and paths and use the CUDA Clang toolchain.
 build:cuda_clang_official --config=cuda_clang
 build:cuda_clang_official --action_env=TF_CUDA_VERSION="11"
 build:cuda_clang_official --action_env=TF_CUDNN_VERSION="8"
@@ -455,7 +453,7 @@ build:rbe_linux --linkopt=-lm
 build:rbe_linux --host_linkopt=-lm
 build:rbe_linux_cpu --config=rbe_linux
-# Linux cpu and cuda builds now share the same toolchain.
+# Linux cpu and cuda builds share the same toolchain now.
 build:rbe_linux_cpu --host_crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
 build:rbe_linux_cpu --crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
 build:rbe_linux_cpu --extra_toolchains="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain-linux-x86_64"
@@ -544,34 +542,82 @@ try-import %workspace%/.tf_configure.bazelrc
 try-import %workspace%/.bazelrc.user
 # Here are bazelrc configs for release builds
-build:release_base --config=v2
+# Build TensorFlow v2.
 test:release_base --test_size_filters=small,medium
-build:release_cpu_linux --config=release_base
+# Target the AVX instruction set
 build:release_cpu_linux --config=avx_linux
-build:release_cpu_linux --crosstool_top="@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
+# Use the Clang toolchain to compile
+build:release_cpu_linux --crosstool_top="@sigbuild-r2.14-clang_config_cuda//crosstool:toolchain"
+# Disable clang extension that rejects type definitions within offsetof.
+# This was added in clang-16 by https://reviews.llvm.org/D133574.
+# Can be removed once upb is updated, since a type definition is used within
+# offsetof in the current version of upb.
+# See https://github.com/protocolbuffers/upb/blob/9effcbcb27f0a665f9f345030188c0b291e32482/upb/upb.c#L183.
+build:release_cpu_linux --copt=-Wno-gnu-offsetof-extensions
+# Set lld as the linker.
+build:release_cpu_linux --linkopt="-fuse-ld=lld"
+build:release_cpu_linux --linkopt="-lm"
+# Container environment settings below this point.
+# Use Python 3.X as installed in container image
+build:release_cpu_linux --action_env PYTHON_BIN_PATH="/usr/bin/python3"
+build:release_cpu_linux --action_env PYTHON_LIB_PATH="/usr/lib/tf_python"
+build:release_cpu_linux --python_path="/usr/bin/python3"
+# Set Clang as compiler. Use the actual path to clang installed in the container.
+build:release_cpu_linux --repo_env=CC="/usr/lib/llvm-16/bin/clang"
+build:release_cpu_linux --repo_env=BAZEL_COMPILER="/usr/lib/llvm-16/bin/clang"
+# Store performance profiling log in the mounted artifact directory.
+# The profile can be viewed by visiting chrome://tracing in a Chrome browser.
+# See https://docs.bazel.build/versions/main/skylark/performance.html#performance-profiling
+build:release_cpu_linux --profile=/tf/pkg/profile.json.gz
+# Test-related settings below this point.
+test:release_cpu_linux --build_tests_only --keep_going --test_output=errors --verbose_failures=true
+test:release_cpu_linux --local_test_jobs=HOST_CPUS
 test:release_cpu_linux --test_env=LD_LIBRARY_PATH
-build:release_cpu_macos --config=release_base
-build:release_cpu_macos --config=avx_linux
-build:release_gpu_base --config=cuda
-build:release_gpu_base --action_env=TF_CUDA_VERSION="11"
-build:release_gpu_base --action_env=TF_CUDNN_VERSION="8"
-build:release_gpu_base --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
+# Give only the list of failed tests at the end of the log
+test:release_cpu_linux --test_summary=short
 build:release_gpu_linux --config=release_cpu_linux
-build:release_gpu_linux --config=release_gpu_base
-build:release_gpu_linux --config=tensorrt
-build:release_gpu_linux --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
-build:release_gpu_linux --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-11.1/lib64:/usr/local/tensorrt/lib"
-build:release_gpu_linux --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
-build:release_gpu_linux --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
+# Set up compilation CUDA version and paths and use the CUDA Clang toolchain.
+# Note that linux cpu and cuda builds share the same toolchain now.
+build:release_gpu_linux --config=cuda_clang_official
+test:release_gpu_linux --test_env=LD_LIBRARY_PATH="/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
+# Local test jobs have to be 4 because parallel_gpu_execute is fragile, I think
+test:release_gpu_linux --test_timeout=300,450,1200,3600 --local_test_jobs=4 --run_under=//tensorflow/tools/ci_build/gpu_build:parallel_gpu_execute
+# The old gcc linux build options are preserved in the unsupported_*_linux
+# configs. If your project fails to build with Clang, you can use these
+# unsupported flags to replace the release flags in your build command.
+# However, please note that the old toolchain is no longer officially supported
+# by TensorFlow and the unsupported configs will be removed soon (b/299962977).
+# We strongly recommend that you migrate to Clang as your compiler for
+# TensorFlow Linux builds. Instructions are available in the official
+# documentation:
+# https://www.tensorflow.org/install/source#install_clang_recommended_linux_only
+# Another good option is to use our Docker containers to build and test TF:
+# https://github.com/tensorflow/tensorflow/tree/master/tensorflow/tools/tf_sig_build_dockerfiles.
+build:unsupported_cpu_linux --config=avx_linux
+build:unsupported_cpu_linux --crosstool_top="@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain"
+test:unsupported_cpu_linux --test_env=LD_LIBRARY_PATH
+test:unsupported_cpu_linux --config=release_base
+build:unsupported_gpu_linux --config=cuda
+build:unsupported_gpu_linux --config=unsupported_cpu_linux
+build:unsupported_gpu_linux --action_env=TF_CUDA_VERSION="11"
+build:unsupported_gpu_linux --action_env=TF_CUDNN_VERSION="8"
+build:unsupported_gpu_linux --repo_env=TF_CUDA_COMPUTE_CAPABILITIES="sm_35,sm_50,sm_60,sm_70,sm_75,compute_80"
+build:unsupported_gpu_linux --config=tensorrt
+build:unsupported_gpu_linux --action_env=CUDA_TOOLKIT_PATH="/usr/local/cuda-11.2"
+build:unsupported_gpu_linux --action_env=LD_LIBRARY_PATH="/usr/local/cuda:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:/usr/local/cuda-11.1/lib64:/usr/local/tensorrt/lib"
+build:unsupported_gpu_linux --action_env=GCC_HOST_COMPILER_PATH="/dt9/usr/bin/gcc"
+build:unsupported_gpu_linux --crosstool_top=@ubuntu20.04-gcc9_manylinux2014-cuda11.2-cudnn8.1-tensorrt7.2_config_cuda//crosstool:toolchain
+build:release_cpu_macos --config=avx_linux
+test:release_cpu_macos --config=release_base
 # TODO(kanglan): Update windows configs after b/289091160 is fixed
-build:release_cpu_windows --config=release_base
 build:release_cpu_windows --config=avx_win
 build:release_cpu_windows --define=no_tensorflow_py_deps=true
+test:release_cpu_windows --config=release_base
 # Exclude TFRT integration for anything but Linux.
 build:android --config=no_tfrt
...
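Per the migration note in the `.bazelrc` diff above, a project that cannot yet build with Clang can swap the release config for the corresponding unsupported one in its build command. A hypothetical invocation sketch, assuming the `unsupported_*_linux` configs are still present in the imported `.bazelrc` and using the pip-package target as a stand-in for your own build target:

```shell
# Officially supported Clang release build:
bazel build --config=release_gpu_linux //tensorflow/tools/pip_package:build_pip_package

# Temporary gcc fallback (unsupported; to be removed per b/299962977):
bazel build --config=unsupported_gpu_linux //tensorflow/tools/pip_package:build_pip_package
```

Only the config name changes: `unsupported_gpu_linux` already expands to `unsupported_cpu_linux` plus the old CUDA/TensorRT settings, so no other flags need to be edited.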