1. 25 June 2019, 7 commits
    • Auto merge of #61765 - Keruspe:rustbuild-cxx, r=alexcrichton · 40ab9d2b
      committed by bors
      rustbuild: detect cxx for all targets
      
      Replaces #61544
      Fixes #59917
      
      We need CXX to build llvm-libunwind, which can be enabled for all targets.
      As we needed it for all hosts anyway, just move the detection so that it is run for all targets (which include all hosts) instead.
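      As an illustration only (toy code and a hypothetical detection helper, not the actual rustbuild source), the sketch below shows the idea: compiler detection runs over the full target list, which is a superset of the host list, so every host is still covered.

```rust
// Toy sketch, not the actual rustbuild code: C++ compiler detection iterates
// over all targets (a superset of the hosts) instead of over hosts only.
fn detect_cxx(triple: &str) -> Option<String> {
    // Hypothetical detection: consult an environment override, else guess a
    // conventional cross-compiler name.
    std::env::var(format!("CXX_{}", triple.replace('-', "_")))
        .ok()
        .or_else(|| Some(format!("{}-g++", triple)))
}

fn main() {
    let hosts = ["x86_64-unknown-linux-gnu"];
    let targets = ["x86_64-unknown-linux-gnu", "aarch64-unknown-linux-gnu"];
    assert!(hosts.iter().all(|h| targets.contains(h)));

    // After the change: run detection for every target, which covers all hosts too.
    for triple in &targets {
        println!("CXX for {}: {:?}", triple, detect_cxx(triple));
    }
}
```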
    • Auto merge of #62094 - oli-obk:zst_intern, r=eddyb · 10deeae3
      committed by bors
      Don't ICE on mutable ZST slices
      
      fixes #62045
    • Auto merge of #61572 - Aaron1011:fix/generator-ref, r=varkor · 53ae6d2e
      committed by bors
      Fix HIR visit order
      
      Fixes #61442
      
      When rustc::middle::region::ScopeTree computes its yield_in_scope
      field, it relies on the HIR visitor order to properly compute which
      types must be live across yield points. In order for the computed scopes
      to agree with the generated MIR, we must ensure that expressions
      evaluated before a yield point are visited before the 'yield'
      expression.
      
      However, the visitor order for ExprKind::AssignOp
      was incorrect. The left-hand side of a compound assignment expression is
      evaluated before the right-hand side, but the right-hand expression was
      being visited before the left-hand expression. If the left-hand
      expression caused a new type to be introduced (e.g. through a
      deref-coercion), the new type would be incorrectly seen as occurring
      *after* the yield point, instead of before. This leads to a mismatch
      between the computed generator types and the MIR, since the MIR will
      correctly see the type as being live across the yield point.
      
      To fix this, we correct the visitor order for ExprKind::AssignOp
      to reflect the actual evaluation order.
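      As an illustration only (a toy AST and visitor, not the rustc HIR visitor), the sketch below shows why the visit order of a compound assignment has to mirror its evaluation order:

```rust
// Toy AST and visitor, not the rustc intravisit code: a compound assignment
// `lhs op= rhs` evaluates its left-hand side first, so the visitor must walk
// the left-hand side first as well. Otherwise a type introduced by `lhs`
// (e.g. via a deref-coercion) would appear to occur after a `yield` in `rhs`.
enum Expr {
    Yield,
    Var(&'static str),
    AssignOp(Box<Expr>, Box<Expr>), // (lhs, rhs)
}

fn visit_expr(expr: &Expr, order: &mut Vec<String>) {
    match expr {
        Expr::Yield => order.push("yield".to_string()),
        Expr::Var(name) => order.push(format!("var {}", name)),
        Expr::AssignOp(lhs, rhs) => {
            visit_expr(lhs, order); // corrected order: lhs before rhs
            visit_expr(rhs, order);
        }
    }
}

fn main() {
    // Roughly `s += { yield; ... }`: the variable is seen before the yield
    // point, matching the order in which the generated MIR evaluates them.
    let expr = Expr::AssignOp(Box::new(Expr::Var("s")), Box::new(Expr::Yield));
    let mut order = Vec::new();
    visit_expr(&expr, &mut order);
    assert_eq!(order, ["var s", "yield"]);
}
```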
    • Auto merge of #62100 - ehuss:update-cargo, r=alexcrichton · a5b17298
      committed by bors
      Update cargo
      
      17 commits in 807429e1b6da4e2ec52488ef2f59e77068c31e1f..4c1fa54d10f58d69ac9ff55be68e1b1c25ecb816
      2019-06-11 14:06:10 +0000 to 2019-06-24 11:24:18 +0000
      - Fix typo in comment (rust-lang/cargo#7066)
      - travis: enforce formatting of subcrates as well (rust-lang/cargo#7063)
      - _cargo: Make function style consistent (rust-lang/cargo#7060)
      - Update some fix comments. (rust-lang/cargo#7061)
      - Stabilize default-run (rust-lang/cargo#7056)
      - Fix typo in comment (rust-lang/cargo#7054)
      - fix(fingerpring): do not touch intermediate artifacts (rust-lang/cargo#7050)
      - Resolver test/debug cleanup (rust-lang/cargo#7045)
      - Rename to_url → into_url (rust-lang/cargo#7048)
      - Remove needless lifetimes (rust-lang/cargo#7047)
      - Revert test directory cleaning change. (rust-lang/cargo#7042)
      - cargo book /reference/manifest: fix typo (rust-lang/cargo#7041)
      - Extract resolver tests to their own crate (rust-lang/cargo#7011)
      - ci: Do not install addons on rustfmt build jobs (rust-lang/cargo#7038)
      - Support absolute paths in dep-info files (rust-lang/cargo#7030)
      - ci: Run cargo fmt on all workspaces (rust-lang/cargo#7033)
      - Deprecated ONCE_INIT in favor of Once::new() (rust-lang/cargo#7031)
    • Auto merge of #61787 - ecstatic-morse:dataflow-split-block-sets, r=pnkfelix · 8aa42ed7
      committed by bors
      rustc_mir: Hide initial block state when defining transfer functions
      
      This PR addresses [this FIXME](https://github.com/rust-lang/rust/blob/2887008e0ce0824be4e0e9562c22ea397b165c97/src/librustc_mir/dataflow/mod.rs#L594-L596).
      
      This makes `sets.on_entry` inaccessible in `{before_,}{statement,terminator}_effect`. This field was meant to allow implementors of `BitDenotation` to access the initial state for each block (optionally with the effect of all previous statements applied via `accumulates_intrablock_state`) while defining transfer functions.  However, the ability to set the initial value for the entry set of each basic block (except for START_BLOCK) no longer exists. As a result, this functionality is mostly useless, and when it *was* used it was used erroneously (see #62007).
      
      Since `on_entry` is now useless, we can also remove `BlockSets`, which held the `gen`, `kill`, and `on_entry` bitvectors and replace it with a `GenKill` struct. Variables of this type are called `trans` since they represent a transfer function. `GenKill`s are stored contiguously in `AllSets`, which reduces the number of bounds checks and may improve cache performance: one is almost never accessed without the other.
      
      Replacing `BlockSets` with `GenKill` allows us to define some new helper functions which streamline dataflow iteration and the dataflow-at-location APIs. Notably, `state_for_location` used a subtle side-effect of the `kill`/`kill_all` setters to apply the transfer function, and could be incorrect if a transfer function depended on effects of previous statements in the block on `gen_set`.
      
      Additionally, this PR merges `BitSetOperator` and `InitialFlow` into one trait. Since the value of `InitialFlow` defines the semantics of the `join` operation, there's no reason to have separate traits for each. We can add a default impl of `join` which branches based on `BOTTOM_VALUE`. This should get optimized away.
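      A minimal sketch of the gen/kill shape described above, using toy types and names rather than the actual rustc_mir ones: implementors only ever mutate a `GenKill`-style transfer function, while applying it to a block's entry state happens elsewhere in the framework.

```rust
// Toy types, not the rustc_mir dataflow framework: a transfer function is
// just a pair of gen/kill sets; applying it to a block's entry state is done
// by the framework, outside the implementor's transfer-function code.
use std::collections::HashSet;

/// Stand-in for the framework's bitsets.
type BitSet = HashSet<usize>;

/// What the PR calls `trans`: the transfer function an implementor builds up.
#[derive(Default)]
struct GenKill {
    gen_set: BitSet,
    kill_set: BitSet,
}

impl GenKill {
    /// Mark a bit as generated (and no longer killed) by this block.
    fn gen_bit(&mut self, bit: usize) {
        self.gen_set.insert(bit);
        self.kill_set.remove(&bit);
    }

    /// Mark a bit as killed (and no longer generated) by this block.
    fn kill_bit(&mut self, bit: usize) {
        self.kill_set.insert(bit);
        self.gen_set.remove(&bit);
    }

    /// Apply the transfer function to an entry state. Because this happens
    /// here rather than in the implementor's code, the entry state cannot be
    /// read (or misused) while statement/terminator effects are defined.
    fn apply(&self, state: &mut BitSet) {
        state.extend(self.gen_set.iter().copied());
        for bit in &self.kill_set {
            state.remove(bit);
        }
    }
}

fn main() {
    let mut trans = GenKill::default();
    trans.gen_bit(1);
    trans.kill_bit(2);

    let mut entry: BitSet = [2, 3].iter().copied().collect();
    trans.apply(&mut entry);

    let expected: BitSet = [1, 3].iter().copied().collect();
    assert_eq!(entry, expected);
}
```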
    • Update cargo · 342fa2be
      committed by Eric Huss
    • Auto merge of #62081 - RalfJung:miri-pointer-checks, r=oli-obk · 7e08576e
      committed by bors
      Refactor miri pointer checks
      
      Centralize bounds, alignment and NULL checking for memory accesses in one function: `memory.check_ptr_access`. That function also takes care of converting a `Scalar` to a `Pointer`, should that be needed.  Not all accesses need that though: if the access has size 0, `None` is returned. Everyone accessing memory based on a `Scalar` should use this method to get the `Pointer` they need.
      
      All operations on the `Allocation` work on `Pointer` inputs and expect all the checks to have happened (and will ICE if the bounds are violated). The operations on `Memory` work on `Scalar` inputs and do the checks themselves.
      
      The only other public method to check pointers is `memory.ptr_may_be_null`, which is needed in a few places. No need for `check_align` or similar methods. That makes the public API surface much easier to use and harder to misuse.
      
      This should be largely no-functional-change, except that ZST accesses to a "true" pointer that is dangling or out-of-bounds are now considered UB. This is to be conservative wrt. whatever LLVM might be doing.
      
      While I am at it, this also removes the assumption that the vtable part of a `dyn Trait`-fat-pointer is a `Pointer` (as opposed to a pointer cast to an integer, stored as raw bits).
      
      r? @oli-obk
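      A sketch of the described API shape with toy types (not the actual miri/rustc signatures): a single entry point does the NULL, alignment, and bounds checks, converts a `Scalar` into a `Pointer`, and returns `None` for size-0 accesses that need no pointer.

```rust
// Toy model, not the actual rustc/miri types: `Scalar` is either a real
// pointer into an allocation or a raw integer; `check_ptr_access` is the
// single place where NULL, alignment, and bounds checks happen.
#[derive(Clone, Copy, Debug, PartialEq)]
struct Pointer {
    alloc_size: u64, // size of the allocation this pointer points into
    offset: u64,     // offset of the pointer within that allocation
}

#[derive(Clone, Copy, Debug)]
enum Scalar {
    Ptr(Pointer),
    Bits(u64), // a pointer cast to an integer, stored as raw bits
}

fn check_ptr_access(sptr: Scalar, size: u64, align: u64) -> Result<Option<Pointer>, String> {
    match sptr {
        // Integer "pointers": NULL is always rejected; otherwise only size-0
        // accesses are allowed, and they need no `Pointer` at all.
        Scalar::Bits(0) => Err("NULL pointer access".to_string()),
        Scalar::Bits(bits) if size == 0 && bits % align == 0 => Ok(None),
        Scalar::Bits(_) => Err("access through dangling integer pointer".to_string()),
        // Real pointers: alignment and bounds are checked even for size 0
        // (the conservative choice for ZST accesses mentioned above).
        Scalar::Ptr(ptr) => {
            if ptr.offset % align != 0 {
                Err(format!("pointer not aligned to {}", align))
            } else if ptr.offset + size > ptr.alloc_size {
                Err("out-of-bounds access".to_string())
            } else if size == 0 {
                Ok(None)
            } else {
                Ok(Some(ptr))
            }
        }
    }
}

fn main() {
    let p = Pointer { alloc_size: 8, offset: 4 };
    assert_eq!(check_ptr_access(Scalar::Ptr(p), 4, 4), Ok(Some(p)));
    assert!(check_ptr_access(Scalar::Ptr(p), 8, 4).is_err()); // out of bounds
    assert!(check_ptr_access(Scalar::Bits(0), 0, 1).is_err()); // NULL access
    assert_eq!(check_ptr_access(Scalar::Bits(16), 0, 8), Ok(None)); // size-0 access
}
```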
  2. 24 June 2019, 19 commits
  3. 23 June 2019, 14 commits