1. 09 Jul 2018, 3 commits
  2. 03 Jul 2018, 1 commit
  3. 10 Jun 2018, 1 commit
  4. 08 Jun 2018, 1 commit
  5. 24 May 2018, 2 commits
  6. 14 May 2018, 1 commit
  7. 10 May 2018, 1 commit
    • A
      ci: Compile LLVM with Clang 6.0.0 · 7e5b9ac4
      Committed by Alex Crichton
      Currently on CI we predominantly compile LLVM with the default system
      compiler, which means gcc on Linux, some version of Clang on OSX, MSVC on
      Windows, and gcc on MinGW. This commit switches Linux, OSX, and Windows to
      all use Clang 6.0.0 to build LLVM (aka the C/C++ compiler as part of the
      bootstrap). This looks to generate faster code according to #49879, which
      translates to a faster rustc (as LLVM internally is faster).
      
      The major changes here were to the containers that build Linux releases,
      basically adding a new step that uses the previous gcc 4.8 compiler to compile
      the next Clang 6.0.0 compiler. Otherwise the OSX and Windows scripts have been
      updated to download precompiled versions of Clang 6 and configure the build to
      use them.
      
      Note that `cc` was updated here to fix using `clang-cl` with `cc-rs` on MSVC, as
      well as an update to `sccache` on Windows which was needed to correctly work
      with `clang-cl`. Finally the MinGW compiler is entirely left out here
      intentionally as it's currently thought that Clang can't generate C++ code for
      MinGW and we need to use gcc, but this should be verified eventually.
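      The switch described above largely amounts to pointing rustbuild's LLVM
      build at the downloaded Clang instead of the system compiler. A minimal
      sketch, assuming a hypothetical install path (this is not the actual CI
      script):

      ```shell
      # Assumed install location of the precompiled Clang 6.0.0 (hypothetical)
      CLANG_DIR=/opt/clang-6.0.0
      # rustbuild respects CC/CXX when compiling LLVM during the bootstrap
      export CC="$CLANG_DIR/bin/clang"
      export CXX="$CLANG_DIR/bin/clang++"
      echo "LLVM will be built with CC=$CC CXX=$CXX"
      ```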
  8. 13 Apr 2018, 1 commit
  9. 06 Apr 2018, 1 commit
    • K
      Give a name to every CI job. · 649f431a
      Committed by kennytm
      Bots that read the log can simply look for `[CI_JOB_NAME=...]` to find out
      the job's name.
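      As a sketch of what such a bot might do (the log contents and file name
      here are fabricated for illustration):

      ```shell
      # Fabricated log excerpt containing the marker described above
      printf 'setup...\n[CI_JOB_NAME=dist-x86_64-linux]\nbuilding...\n' > ci.log
      # Extract the job name from the marker
      job=$(grep -o 'CI_JOB_NAME=[^]]*' ci.log | cut -d= -f2)
      echo "$job"
      ```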
  10. 05 Apr 2018, 1 commit
  11. 03 Apr 2018, 2 commits
  12. 02 Apr 2018, 1 commit
  13. 23 Mar 2018, 1 commit
    • A
      ci: Don't use Travis caches for docker images · a09e9e9a
      Committed by Alex Crichton
      This commit moves away from Travis's built-in caches to our own caching
      on S3 for docker layers between builds. Unfortunately the Travis caches
      have over time had a few critical pain points:
      
      * Caches are only updated for successful builds, meaning that if a build
        times out or fails in a different location, the successfully created
        docker images aren't always cached. While this makes sense as a general
        rule of caches, it hurts our use cases.
      
      * Caches are per-branch and per-builder, which means that we don't have a
        separate cache on each release channel. All our merges go through the
        `auto` branch, which means that they're all sharing the same cache, even
        those for merging to master/beta. This means that PRs which switch
        between master/beta will keep rebuilding and having cache misses.
      
      * Caches have historically been invalidated somewhat regularly, and a
        little more aggressively than we'd want (I think).
      
      * We don't always need to update the contents of the cache if the Docker image
        didn't change at all, and saving off the docker layers can sometimes be quite
        expensive.
      
      For all these reasons this commit drops the usage of Travis's built-in caching
      support. Instead our own caching is used by storing blobs to S3. Normally this
      would be a very risky endeavour but we're basically priming a cache for a cache
      (docker) so if we get this wrong the failure mode is longer builds, not stale
      caches. We'll notice that pretty quickly and hopefully fix it!
      
      The logic here is inserted directly into the `src/ci/docker/run.sh` script to
      download an image based on a shasum of the `Dockerfile` and other assorted files.
      This blob, if found, is loaded into docker and we record what layers were
      inserted. After docker finishes the build (hopefully quickly with lots of cache
      hits) we then see the sha of the final image. If it's one of the layers we
      loaded then there's no need to update the cache. Otherwise we upload our layers
      to the global cache, possibly overwriting what we previously just downloaded.
      
      This is hopefully a step towards mitigating #49278 although it doesn't
      completely fix it as it means we'll still probably have to retry builds that
      bust the cache.
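      The decision logic can be sketched roughly as follows; the file contents,
      layer shas, and cache-hit check are simplified stand-ins for what
      `src/ci/docker/run.sh` actually does:

      ```shell
      # Key the cached blob on a hash of the Dockerfile (the real script also
      # hashes other assorted files); this Dockerfile is fabricated
      printf 'FROM ubuntu:16.04\n' > Dockerfile
      key=$(sha256sum Dockerfile | cut -d' ' -f1)
      echo "cache key: $key"

      # After the build, upload only when the final image sha is not among the
      # layers loaded from the cache (shas below are placeholders)
      loaded="aaa111 bbb222 ccc333"
      final="bbb222"
      case " $loaded " in
        *" $final "*) decision="cache hit: skip upload" ;;
        *)            decision="cache miss: upload layers to S3" ;;
      esac
      echo "$decision"
      ```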
  14. 21 Mar 2018, 2 commits
  15. 10 Mar 2018, 2 commits
    • K
      b8cd6e5d
    • A
      travis: Upgrade dist builders for OSX · d65dfd13
      Committed by Alex Crichton
      This commit upgrades the dist builders for OSX to Travis's new
      `xcode9.3-moar` image, which has 3 cores available to it instead of 2.
      This should help us provide speedier builds on OSX and, in theory, hit
      timeouts less often!
      
      Note that historically the dist builders for OSX have been a different version
      than the ones that are running tests. I had forgotten why this was the case and
      digging around brought up 30761556 where apparently Xcode 8 wasn't able to
      compile LLVM with `MACOSX_DEPLOYMENT_TARGET=10.7` which we desired. On a whim I
      gave this PR a spin and it [looks like][green] this has since been fixed (maybe
      in LLVM?). In any case those green builds should hopefully mean that we can
      safely upgrade and get faster infrastructure to boot.
      
      This commit also includes an upgrade of OpenSSL. This is not done for
      security reasons but rather build system reasons. Originally builds with
      the new image [did not succeed][red] due to weird build failures in
      OpenSSL, but upgrading seems to have made the spurious errors go away, so
      here's to also hoping that's fixed!
      
      [green]: https://travis-ci.org/rust-lang/rust/builds/351353412
      [red]: https://travis-ci.org/rust-lang/rust/builds/350969248
  16. 08 Mar 2018, 1 commit
    • A
      travis: Upgrade OSX builders · 55a2fdf2
      Committed by Alex Crichton
      This upgrades the OSX builders to the `xcode9.3-moar` image which has 3 cores as
      opposed to the 2 that our builders currently have. Should help make those OSX
      builds a bit speedier!
  17. 04 Mar 2018, 1 commit
    • A
      rust: Import LLD for linking wasm objects · d69b2480
      Committed by Alex Crichton
      This commit imports the LLD project from LLVM to serve as the default
      linker for the `wasm32-unknown-unknown` target. The `binaryen` submodule
      is consequently removed along with "binaryen linker" support in rustc.
      
      Moving to LLD brings with it a number of benefits for wasm code:
      
      * LLD is itself an actual linker, so there's no need to compile all wasm code
        with LTO any more. As a result builds should be *much* speedier as LTO is no
        longer forcibly enabled for all builds of the wasm target.
      * LLD is quickly becoming an "official solution" for linking wasm code together.
        This, I believe at least, is intended to be the main supported linker for
        native code and wasm moving forward. Picking up support early on should help
        ensure that we can help LLD identify bugs and otherwise prove that it works
        great for all our use cases!
      * Improvements to the wasm toolchain are currently primarily focused around LLVM
        and LLD (from what I can tell at least), so it's in general much better to be
        on this bandwagon for bugfixes and new features.
      * Historical "hacks" like `wasm-gc` will soon no longer be necessary:
        LLD will [natively implement][gc] `--gc-sections` (better than
        `wasm-gc`!), which means a postprocessor is no longer needed to show
        off Rust's "small wasm binary size".
      
      LLD is added in a pretty standard way to rustc right now. A new rustbuild target
      was defined for building LLD, and this is executed when a compiler's sysroot is
      being assembled. LLD is compiled against the LLVM that we've got in tree, which
      means we're currently on the `release_60` branch, but this may get upgraded in
      the near future!
      
      LLD is placed into rustc's sysroot in a `bin` directory. This is similar to
      where `gcc.exe` can be found on Windows. This directory is automatically added
      to `PATH` whenever rustc executes the linker, allowing us to define a `WasmLd`
      linker which implements the interface that `wasm-ld`, LLD's frontend, expects.
      
      As with Emscripten, LLD is currently only enabled for Tier 1 platforms,
      notably OSX/Windows/Linux, and will need to be installed manually for
      compiling to wasm on other platforms. LLD is turned off by default in
      rustbuild and requires a `config.toml` option to turn it on.
      
      Finally the unstable `#![wasm_import_memory]` attribute was also removed as LLD
      has a native option for controlling this.
      
      [gc]: https://reviews.llvm.org/D42511
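      Since LLD is off by default, opting in looks something like the following
      `config.toml` fragment; the `lld` key under `[rust]` is an assumption
      based on the commit's description of the opt-in, so check
      `config.toml.example` in the tree for the authoritative name:

      ```toml
      # Hypothetical config.toml fragment enabling the LLD build in rustbuild
      [rust]
      lld = true
      ```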
  18. 23 Feb 2018, 1 commit
  19. 11 Feb 2018, 1 commit
  20. 29 Jan 2018, 1 commit
    • A
      rustc: Split Emscripten to a separate codegen backend · c6daea7c
      Committed by Alex Crichton
      This commit introduces a separately compiled backend for Emscripten, avoiding
      compiling the `JSBackend` target in the main LLVM codegen backend. This builds
      on the foundation provided by #47671 to create a new codegen backend dedicated
      solely to Emscripten, removing the `JSBackend` of the main codegen backend in
      the process.
      
      This commit adds a new field to each target which specifies the backend
      to use for translation; the default is `llvm`, the main backend that we
      use. The Emscripten targets specify an `emscripten` backend instead of
      the main `llvm` one.
      
      There's a whole bunch of consequences of this change, but I'll try to enumerate
      them here:
      
      * A *second* LLVM submodule was added in this commit. The main LLVM submodule
        will soon start to drift from the Emscripten submodule, but currently they're
        both at the same revision.
      * Logic was added to rustbuild to *not* build the Emscripten backend by default.
        This is gated behind a `--enable-emscripten` flag to the configure script. By
        default users should neither check out the emscripten submodule nor compile
        it.
      * The `init_repo.sh` script was updated to fetch the Emscripten submodule from
        GitHub the same way we do the main LLVM submodule (a tarball fetch).
      * The Emscripten backend, turned off by default, is still turned on for a number
        of targets on CI. We'll only be shipping an Emscripten backend with Tier 1
        platforms, though. All cross-compiled platforms will not be receiving an
        Emscripten backend yet.
      
      This commit means that when you download the `rustc` package in Rustup for Tier
      1 platforms you'll be receiving two trans backends, one for Emscripten and one
      that's the general LLVM backend. If you never compile for Emscripten you'll
      never use the Emscripten backend, so we may update this one day to only download
      the Emscripten backend when you add the Emscripten target. For now though it's
      just an extra 10MB gzip'd.
      
      Closes #46819
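      The per-target backend field described above can be sketched as a simple
      mapping; the helper function and the matching rule are illustrative
      stand-ins, not rustc's actual internals:

      ```shell
      # Illustrative mapping from target triple to codegen backend: `llvm` is
      # the default, and only the Emscripten targets override it
      backend_for() {
        case "$1" in
          *-emscripten) echo emscripten ;;
          *)            echo llvm ;;
        esac
      }
      backend_for wasm32-unknown-emscripten   # emscripten
      backend_for x86_64-unknown-linux-gnu    # llvm
      ```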
  21. 17 Jan 2018, 2 commits
    • E
      Move dist-cloudabi/ into disabled/. · 5801f95c
      Committed by Ed Schouten
      There is not enough capacity to do automated builds for CloudABI at this
      time.
    • E
      Add a Docker container for doing automated builds for CloudABI. · 24c3a6db
      Committed by Ed Schouten
      Setting up a cross compilation toolchain for CloudABI is relatively easy.
      It's just a matter of installing a somewhat recent version of Clang (5.0
      preferred) and the corresponding ${target}-cxx-runtime package, containing
      a set of core C/C++ libraries (libc, libc++, libunwind, etc).
      
      Eventually it would be nice if we could also run 'x.py test'. That,
      however, still requires some more work. Both libtest and compiletest
      would need to be adjusted to deal with CloudABI's requirement of having
      all of an application's dependencies injected. Let's settle for just
      doing 'x.py dist' for now.
  22. 12 Jan 2018, 1 commit
  23. 30 Dec 2017, 1 commit
  24. 27 Dec 2017, 2 commits
  25. 26 Dec 2017, 1 commit
    • K
      Follow up to #46924 · 472a3c10
      Committed by kennytm
      It seems using `fe80::/64` causes `docker start` to fail with "Address
      already in use". Change to a unique local address range instead.
  26. 23 Dec 2017, 2 commits
  27. 13 Dec 2017, 2 commits
  28. 06 Dec 2017, 1 commit
  29. 05 Dec 2017, 1 commit
  30. 03 Dec 2017, 1 commit