Unverified commit ff80cde9, authored by kennytm

Rollup merge of #48971 - mark-i-m:fix_readmes, r=nikomatsakis

Move librustc_typeck READMEs to rustc guide

cc https://github.com/rust-lang-nursery/rustc-guide/issues/2 and #48478

Don't merge this before https://github.com/rust-lang-nursery/rustc-guide/pull/85

r? @nikomatsakis

NB: This crate is part of the Rust compiler. For an overview of the
compiler as a whole, see
[the README.md file found in `librustc`](../librustc/README.md).
For a high-level intro to how type checking works in rustc, see the
[type checking] chapter of the [rustc guide].
The `rustc_typeck` crate contains the source for "type collection" and
"type checking", as well as a few other bits of related functionality.
(It draws heavily on the [type inferencing][infer] and
[trait solving][traits] code found in librustc.)
[infer]: ../librustc/infer/README.md
[traits]: ../librustc/traits/README.md
## Type collection
Type "collection" is the process of converting the types found in the
HIR (`hir::Ty`), which represent the syntactic things that the user
wrote, into the **internal representation** used by the compiler
(`Ty<'tcx>`) -- we also do similar conversions for where-clauses and
other bits of the function signature.
To try and get a sense for the difference, consider this function:
```rust
struct Foo { }
fn foo(x: Foo, y: self::Foo) { .. }
//        ^^^     ^^^^^^^^^
```
Those two parameters `x` and `y` each have the same type, but they
will have distinct `hir::Ty` nodes. Those nodes will have different
spans, and of course they encode the path somewhat differently. But
once they are "collected" into `Ty<'tcx>` nodes, they will be
represented by the exact same internal type.
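
As a toy illustration of that idea, here is a minimal sketch: this is *not*
rustc's real `Ty` or interner (the `Ty` enum and `collect` function below are
hypothetical), just a self-contained model of why two syntactically distinct
paths end up as one internal, interned value.

```rust
use std::collections::HashSet;

// Hypothetical stand-in for the compiler's internal type representation.
#[derive(PartialEq, Eq, Hash, Debug)]
enum Ty {
    Adt(String),
}

// "Collection" here just resolves a syntactic path to the item it names;
// different spellings of the same path produce equal `Ty` values.
fn collect(syntactic_path: &str) -> Ty {
    let name = syntactic_path.rsplit("::").next().unwrap().to_owned();
    Ty::Adt(name)
}

fn main() {
    let x = collect("Foo");
    let y = collect("self::Foo");
    assert_eq!(x, y); // two distinct HIR nodes, one internal type

    let mut interner = HashSet::new();
    interner.insert(x);
    assert!(!interner.insert(y)); // `y` was already interned
}
```
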
Collection is defined as a bundle of queries (e.g., `type_of`) for
computing information about the various functions, traits, and other
items in the crate being compiled. Note that each of these queries is
concerned with *interprocedural* things -- for example, for a function
definition, collection will figure out the type and signature of the
function, but it will not visit the *body* of the function in any way,
nor examine type annotations on local variables (that's the job of
type *checking*).
For more details, see the `collect` module.
## Type checking
TODO
[type checking]: https://rust-lang-nursery.github.io/rustc-guide/type-checking.html
[rustc guide]: https://rust-lang-nursery.github.io/rustc-guide/
# Method lookup
Method lookup can be rather complex due to the interaction of a number
of factors, such as self types, autoderef, trait lookup, etc. This
file provides an overview of the process. More detailed notes are in
the code itself, naturally.
One way to think of method lookup is that we convert an expression of
the form:

    receiver.method(...)

into a more explicit UFCS form:

    Trait::method(ADJ(receiver), ...)        // for a trait call
    ReceiverType::method(ADJ(receiver), ...) // for an inherent method call

Here `ADJ` is some kind of adjustment, which is typically a series of
autoderefs and then possibly an autoref (e.g., `&**receiver`). However
we sometimes do other adjustments and coercions along the way, in
particular unsizing (e.g., converting from `[T; n]` to `[T]`).
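
For a concrete instance of this rewrite, consider `str::len` from the
standard library; both forms below compile and are equivalent:

```rust
fn main() {
    let s = Box::new(String::from("hi"));

    // Method-call form: the compiler autoderefs `Box<String> -> String -> str`
    // and then autorefs, so the receiver is adjusted to `&str`.
    let n1 = s.len();

    // The explicit UFCS form with the adjustment spelled out:
    // here ADJ(receiver) = &**s.
    let n2 = str::len(&**s);

    assert_eq!(n1, n2);
}
```
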
## The Two Phases
Method lookup is divided into two major phases: probing (`probe.rs`)
and confirmation (`confirm.rs`). The probe phase is when we decide
what method to call and how to adjust the receiver. The confirmation
phase "applies" this selection, updating the side-tables, unifying
type variables, and otherwise doing side-effectful things.
One reason for this division is to be more amenable to caching. The
probe phase produces a "pick" (`probe::Pick`), which is designed to be
cacheable across method-call sites. Therefore, it does not include
inference variables or other information.
## Probe phase
The probe phase (`probe.rs`) decides what method is being called and
how to adjust the receiver.
### Steps
The first thing that the probe phase does is to create a series of
*steps*. This is done by progressively dereferencing the receiver type
until it cannot be deref'd anymore, as well as applying an optional
"unsize" step. So if the receiver has type `Rc<Box<[T; 3]>>`, this
might yield:

    Rc<Box<[T; 3]>>
    Box<[T; 3]>
    [T; 3]
    [T]

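
Each receiver type in that chain can be observed directly in ordinary code.
Here is a small illustration (using `i32` for `T`):

```rust
use std::rc::Rc;

fn main() {
    let r: Rc<Box<[i32; 3]>> = Rc::new(Box::new([1, 2, 3]));

    let _step1: &Box<[i32; 3]> = &*r; // deref the Rc
    let _step2: &[i32; 3] = &**r;     // deref the Box
    let step3: &[i32] = &**r;         // unsize [i32; 3] -> [i32]

    assert_eq!(step3.len(), 3);
}
```
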
### Candidate assembly
We then search along those steps to create a list of *candidates*. A
`Candidate` is a method item that might plausibly be the method being
invoked. For each candidate, we'll derive a "transformed self type"
that takes into account explicit self.
Candidates are grouped into two kinds, inherent and extension.
**Inherent candidates** are those that are derived from the
type of the receiver itself. So, if you have a receiver of some
nominal type `Foo` (e.g., a struct), any methods defined within an
impl like `impl Foo` are inherent methods. Nothing needs to be
imported to use an inherent method; they are associated with the type
itself (note that inherent impls can only be defined in the same
module as the type itself).
FIXME: Inherent candidates are not always derived from impls. If you
have a trait object, such as a value of type `Box<ToString>`, then the
trait methods (`to_string()`, in this case) are inherently associated
with it. Another case is type parameters, in which case the methods of
their bounds are inherent. However, this part of the rules is subject
to change: when DST's "impl Trait for Trait" is complete, trait object
dispatch could be subsumed into trait matching, and the type parameter
behavior should be reconsidered in light of where clauses.
**Extension candidates** are derived from imported traits. If I have
the trait `ToString` imported, and I call `to_string()` on a value of
type `T`, then we will go off to find out whether there is an impl of
`ToString` for `T`. These kinds of method calls are called "extension
methods". They can be defined in any module, not only the one that
defined `T`. Furthermore, you must import the trait to call such a
method.
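
A small self-contained example of the distinction (the `Widget` and `Ext`
names are purely illustrative):

```rust
mod m {
    pub struct Widget;

    // Inherent impl: callers need no import to use `inherent`.
    impl Widget {
        pub fn inherent(&self) -> &'static str {
            "inherent"
        }
    }

    // A trait supplying an extension method.
    pub trait Ext {
        fn extension(&self) -> &'static str {
            "extension"
        }
    }
    impl Ext for Widget {}
}

fn main() {
    let w = m::Widget;
    assert_eq!(w.inherent(), "inherent"); // works without any import

    use m::Ext; // the trait must be in scope for the extension method
    assert_eq!(w.extension(), "extension");
}
```
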
So, let's continue our example. Imagine that we were calling a method
`foo` with the receiver `Rc<Box<[T; 3]>>` and there is a trait `Foo`
that defines it with `&self` for the type `Rc<U>`, as well as an
inherent method `foo` on `Box<U>` that takes `&mut self`. Then we
might have two candidates:

    &Rc<Box<[T; 3]>> from the impl of `Foo` for `Rc<U>` where `U=Box<[T; 3]>`
    &mut Box<[T; 3]> from the inherent impl on `Box<U>` where `U=[T; 3]`

### Candidate search
Finally, to actually pick the method, we will search down the steps,
trying to match the receiver type against the candidate types. At
each step, we also consider an auto-ref and auto-mut-ref to see whether
that makes any of the candidates match. We pick the first step where
we find a match.
In the case of our example, the first step is `Rc<Box<[T; 3]>>`,
which does not itself match any candidate. But when we autoref it, we
get the type `&Rc<Box<[T; 3]>>` which does match. We would then
recursively consider all where-clauses that appear on the impl: if
those match (or we cannot rule out that they do), then this is the
method we would pick. Otherwise, we would continue down the series of
steps.
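
The auto-mut-ref step is easy to see with an everyday method like
`Vec::push` (a hedged illustration of the adjustment, not the probe code
itself):

```rust
fn main() {
    let mut v = Vec::new();

    // Method-call form: the receiver `v` is adjusted to `&mut v`
    // because the winning candidate takes `&mut self`.
    v.push(1);

    // Explicit UFCS equivalent with the auto-mut-ref written out.
    Vec::push(&mut v, 2);

    assert_eq!(v, [1, 2]);
}
```
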
@@ -8,7 +8,9 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-//! Method lookup: the secret sauce of Rust. See `README.md`.
+//! Method lookup: the secret sauce of Rust. See the [rustc guide] chapter.
+//!
+//! [rustc guide]: https://rust-lang-nursery.github.io/rustc-guide/method-lookup.html
 use check::FnCtxt;
 use hir::def::Def;
......
## Variance of type and lifetime parameters
This file infers the variance of type and lifetime parameters. The
algorithm is taken from Section 4 of the paper "Taming the Wildcards:
Combining Definition- and Use-Site Variance" published in PLDI'11 and
written by Altidor et al., and hereafter referred to as The Paper.
This inference is explicitly designed *not* to consider the uses of
types within code. To determine the variance of type parameters
defined on type `X`, we only consider the definition of the type `X`
and the definitions of any types it references.
We only infer variance for type parameters found on *data types*
like structs and enums. In these cases, there is a fairly straightforward
explanation for what variance means. The variance of the type
or lifetime parameters defines whether `T<A>` is a subtype of `T<B>`
(resp. `T<'a>` and `T<'b>`) based on the relationship of `A` and `B`
(resp. `'a` and `'b`).
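
To see this subtyping in practice, here is a minimal example: `&'a T` is
covariant in `'a`, so a `&'static str` can flow into a context expecting a
shorter lifetime.

```rust
// `pick` only needs both arguments for some common lifetime 'a.
fn pick<'a>(a: &'a str, b: &'a str) -> &'a str {
    if a.len() >= b.len() { a } else { b }
}

fn main() {
    let long_lived: &'static str = "static";
    let owned = String::from("local");

    // Covariance lets `&'static str` shrink to the local lifetime here.
    let best = pick(long_lived, &owned);
    println!("{}", best);
}
```
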
We do not infer variance for type parameters found on traits, fns,
or impls. Variance on trait parameters can indeed make sense
(and we used to compute it), but it is actually rather subtle in
meaning and not that useful in practice, so we removed it. See the
addendum for some details. Variances on fn/impl parameters, on the
other hand, don't make sense, because these parameters are
instantiated and then forgotten; they don't persist in types or
compiled byproducts.
### The algorithm
The basic idea is quite straightforward. We iterate over the types
defined and, for each use of a type parameter X, accumulate a
constraint indicating that the variance of X must be valid for the
variance of that use site. We then iteratively refine the variance of
X until all constraints are met. There is *always* a solution, because at
the limit we can declare all type parameters to be invariant and all
constraints will be satisfied.
As a simple example, consider:

    enum Option<A> { Some(A), None }
    enum OptionalFn<B> { Some(fn(B)), None }
    enum OptionalMap<C> { Some(fn(C) -> C), None }

Here, we will generate the constraints:
1. V(A) <= +
2. V(B) <= -
3. V(C) <= +
4. V(C) <= -
These indicate that (1) the variance of A must be at most covariant;
(2) the variance of B must be at most contravariant; and (3, 4) the
variance of C must be at most covariant *and* contravariant. All of these
results are based on a variance lattice defined as follows:

       *      Top (bivariant)
    -     +
       o      Bottom (invariant)

Based on this lattice, the solution `V(A)=+`, `V(B)=-`, `V(C)=o` is the
optimal solution. Note that there is always a naive solution which
just declares all variables to be invariant.
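
As a rough sketch of the procedure described above (illustrative only, not
the compiler's actual solver), the variances can be modeled as a small
lattice and the constraints solved by repeatedly taking the greatest lower
bound until a fixed point is reached:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Variance {
    Bivariant,     // * (top)
    Covariant,     // +
    Contravariant, // -
    Invariant,     // o (bottom)
}
use Variance::*;

// Greatest lower bound on the lattice drawn above.
fn glb(a: Variance, b: Variance) -> Variance {
    match (a, b) {
        (Bivariant, v) | (v, Bivariant) => v,
        (v, w) if v == w => v,
        _ => Invariant, // + meets - (or anything meets o) at the bottom
    }
}

fn main() {
    // The constraints on C from the example: V(C) <= + and V(C) <= -.
    let constraints = [Covariant, Contravariant];

    // Start at the top and iterate to a fixed point.
    let mut v_c = Bivariant;
    loop {
        let next = constraints.iter().fold(v_c, |acc, &c| glb(acc, c));
        if next == v_c {
            break;
        }
        v_c = next;
    }
    assert_eq!(v_c, Invariant); // C must be invariant
}
```
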
You may be wondering why fixed-point iteration is required. The reason
is that the variance of a use site may itself be a function of the
variance of other type parameters. In full generality, our constraints
take the form:

    V(X) <= Term
    Term := + | - | * | o | V(X) | Term x Term

Here the notation `V(X)` indicates the variance of a type/region
parameter `X` with respect to its defining class. `Term x Term`
represents the "variance transform" as defined in the paper:
> If the variance of a type variable `X` in type expression `E` is `V2`
> and the definition-site variance of the [corresponding] type parameter
> of a class `C` is `V1`, then the variance of `X` in the type expression
> `C<E>` is `V3 = V1.xform(V2)`.
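
Here is a hedged sketch of that transform, modeled on rustc's
`ty::Variance::xform` but simplified and made self-contained:

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum Variance {
    Bivariant,     // *
    Covariant,     // +
    Contravariant, // -
    Invariant,     // o
}
use Variance::*;

// V1.xform(V2): the variance of `X` in `C<E>`, given the definition-site
// variance V1 of C's parameter and the variance V2 of `X` within `E`.
fn xform(v1: Variance, v2: Variance) -> Variance {
    match v1 {
        Covariant => v2, // C<E> varies the same way E does
        Contravariant => match v2 {
            Covariant => Contravariant, // the direction flips
            Contravariant => Covariant,
            other => other, // o and * are fixed points of the flip
        },
        Invariant => Invariant, // an invariant position forces invariance
        Bivariant => Bivariant, // an unused position constrains nothing
    }
}

fn main() {
    // `X` in contravariant position inside a covariant context
    // (e.g., a field of type `fn(X)`) remains contravariant overall.
    assert_eq!(xform(Covariant, Contravariant), Contravariant);
    // Anything under an invariant constructor is invariant.
    assert_eq!(xform(Invariant, Covariant), Invariant);
}
```
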
### Constraints
If I have a struct or enum with where clauses:

    struct Foo<T: Bar> { ... }

you might wonder whether the variance of `T` with respect to `Bar`
affects the variance of `T` with respect to `Foo`. I claim it does not.
The reason: assume that `T` is invariant w/r/t `Bar` but covariant w/r/t
`Foo`. And then we have a `Foo<X>` that is upcast to `Foo<Y>`, where
`X <: Y`. However, while `X: Bar` holds, `Y: Bar` may not. In that
case, the upcast will be illegal, not because of a variance
failure, but rather because the target type `Foo<Y>` is itself just
not well-formed. Basically we get to assume well-formedness of all
types involved before considering variance.
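
A small demonstration of the claim (the `Holder`/`shrink` names are
illustrative): adding a bound to `T` does not stop the struct from being
covariant in `T`.

```rust
// The `T: Clone` bound plays no role in variance inference;
// only the field's use of `T` (a covariant position) matters.
struct Holder<T: Clone> {
    value: T,
}

// Legal precisely because Holder<T> is covariant in T:
// &'static str <: &'a str, hence Holder<&'static str> <: Holder<&'a str>.
fn shrink<'a>(h: Holder<&'static str>) -> Holder<&'a str> {
    h
}

fn main() {
    let h = Holder { value: "hi" };
    let h2 = shrink(h);
    println!("{}", h2.value);
}
```
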
#### Dependency graph management
Because variance is a whole-crate inference, its dependency graph
can become quite muddled if we are not careful. To resolve this, we refactor
into two queries:
- `crate_variances` computes the variance for all items in the current crate.
- `variances_of` accesses the variance for an individual item; it
  works by requesting `crate_variances` and extracting the relevant data.

If you limit yourself to reading `variances_of`, your code will then
depend only on the variance inferred for that particular item.
Ultimately, this setup relies on the red-green algorithm.
In particular, every variance query ultimately depends on -- effectively --
all type definitions in the entire crate (through `crate_variances`),
but since most changes will not result in a change
to the actual results from variance inference,
the `variances_of` query will wind up being considered green after it is re-evaluated.
### Addendum: Variance on traits
As mentioned above, we used to permit variance on traits. This was
computed based on the appearance of trait type parameters in
method signatures and was used to represent the compatibility of
vtables in trait objects (and also "virtual" vtables or dictionaries
in trait bounds). One complication was that variance for
associated types is less obvious, since they can be projected out
and put to myriad uses, so it's not clear when it is safe to allow
`X<A>::Bar` to vary (or indeed just what that means). Moreover (as
covered below) all inputs on any trait with an associated type had
to be invariant, limiting the applicability. Finally, the
annotations (`MarkerTrait`, `PhantomFn`) needed to ensure that all
trait type parameters had a variance were confusing and annoying
for little benefit.
Just for historical reference, I am going to preserve some text indicating
how one could interpret variance and trait matching.
#### Variance and object types
Just as with structs and enums, we can decide the subtyping
relationship between two object types `&Trait<A>` and `&Trait<B>`
based on the relationship of `A` and `B`. Note that for object
types we ignore the `Self` type parameter -- it is unknown, and
the nature of dynamic dispatch ensures that we will always call a
function that expects the appropriate `Self` type. However, we
must be careful with the other type parameters, or else we could
end up calling a function that is expecting one type but provided
another.
To see what I mean, consider a trait like so:

    trait ConvertTo<A> {
        fn convertTo(&self) -> A;
    }

Intuitively, if we had one object `O=&ConvertTo<Object>` and another
`S=&ConvertTo<String>`, then `S <: O` because `String <: Object`
(presuming Java-like "string" and "object" types, my go-to examples
for subtyping). The actual algorithm would be to compare the
(explicit) type parameters pairwise respecting their variance: here,
the type parameter A is covariant (it appears only in a return
position), and hence we require that `String <: Object`.
You'll note though that we did not consider the binding for the
(implicit) `Self` type parameter: in fact, it is unknown, so that's
good. The reason we can ignore that parameter is precisely because we
don't need to know its value until a call occurs, and at that time (as
you said) the dynamic nature of virtual dispatch means the code we run
will be correct for whatever value `Self` happens to be bound to for
the particular object whose method we called. `Self` is thus different
from `A`, because the caller requires that `A` be known in order to
know the return type of the method `convertTo()`. (As an aside, we
have rules preventing methods where `Self` appears outside of the
receiver position from being called via an object.)
#### Trait variance and vtable resolution
But traits aren't only used with objects. They're also used when
deciding whether a given impl satisfies a given trait bound. To set the
scene here, imagine I had a function:

    fn convertAll<A, T: ConvertTo<A>>(v: &[T]) {
        ...
    }

Now imagine that I have an implementation of `ConvertTo` for `Object`:

    impl ConvertTo<i32> for Object { ... }

And I want to call `convertAll` on an array of strings. Suppose
further that for whatever reason I specifically supply the value of
`String` for the type parameter `T`:

    let mut vector = vec!["string", ...];
    convertAll::<i32, String>(vector);

Is this legal? To put it another way, can we apply the `impl` for
`Object` to the type `String`? The answer is yes, but to see why
we have to expand out what will happen:
- `convertAll` will create a pointer to one of the entries in the
vector, which will have type `&String`
- It will then call the impl of `convertTo()` that is intended
  for use with objects. This has the type:

      fn(self: &Object) -> i32

  It is OK to provide a value for `self` of type `&String` because
  `&String <: &Object`.
OK, so intuitively we want this to be legal, so let's bring this back
to variance and see whether we are computing the correct result. We
must first figure out how to phrase the question "is an impl for
`Object,i32` usable where an impl for `String,i32` is expected?"
Maybe it's helpful to think of a dictionary-passing implementation of
type classes. In that case, `convertAll()` takes an implicit parameter
representing the impl. In short, we *have* an impl of type:

    V_O = ConvertTo<i32> for Object

and the function prototype expects an impl of type:

    V_S = ConvertTo<i32> for String

As with any argument, this is legal if the type of the value given
(`V_O`) is a subtype of the type expected (`V_S`). So is `V_O <: V_S`?
The answer will depend on the variance of the various parameters. In
this case, because the `Self` parameter is contravariant and `A` is
covariant, it means that:

    V_O <: V_S iff
        i32 <: i32
        String <: Object

These conditions are satisfied and so we are happy.
#### Variance and associated types
Traits with associated types -- or at minimum projection
expressions -- must be invariant with respect to all of their
inputs. To see why this makes sense, consider what subtyping for a
trait reference means:

    <T as Trait> <: <U as Trait>

means that if I know that `T as Trait`, I also know that `U as
Trait`. Moreover, if you think of it as dictionary passing style,
it means that a dictionary for `<T as Trait>` is safe to use where
a dictionary for `<U as Trait>` is expected.
The problem is that when you can project types out from `<T as
Trait>`, the relationship to types projected out of `<U as Trait>`
is completely unknown unless `T==U` (see #21726 for more
details). Making `Trait` invariant ensures that this is true.
Another related reason is that if we didn't make traits with
associated types invariant, then projection is no longer a
function with a single result. Consider:
```rust
trait Identity { type Out; fn foo(&self); }
impl<T> Identity for T { type Out = T; fn foo(&self) { } }
```
Now if I have `<&'static () as Identity>::Out`, this can be
validly derived as `&'a ()` for any `'a`:

    <&'a () as Identity> <: <&'static () as Identity>
        if &'static () <: &'a ()  -- Identity is contravariant in Self
            if 'static : 'a       -- Subtyping rules for relations

Making the trait invariant, on the other hand, means that
`<&'static () as Identity>::Out` is always `&'static ()` (which might
then be upcast to `&'a ()`, separately). This was helpful in solving #21750.
@@ -8,8 +8,10 @@
 // option. This file may not be copied, modified, or distributed
 // except according to those terms.
-//! Module for inferring the variance of type and lifetime
-//! parameters. See README.md for details.
+//! Module for inferring the variance of type and lifetime parameters. See the [rustc guide]
+//! chapter for more info.
+//!
+//! [rustc guide]: https://rust-lang-nursery.github.io/rustc-guide/variance.html
 use arena;
 use rustc::hir;
......
@@ -87,7 +87,10 @@ pub fn determine_parameters_to_be_inferred<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>
         lang_items: lang_items(tcx),
     };
-    // See README.md for a discussion on dep-graph management.
+    // See the following for a discussion on dep-graph management.
+    //
+    // - https://rust-lang-nursery.github.io/rustc-guide/query.html
+    // - https://rust-lang-nursery.github.io/rustc-guide/variance.html
     tcx.hir.krate().visit_all_item_likes(&mut terms_cx);
     terms_cx
......