Commit cd5c8235 authored by Steve Klabnik, committed by Alex Crichton

/*! -> //!

Sister pull request of https://github.com/rust-lang/rust/pull/19288, but
for the other style of block doc comment.
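To illustrate the change, here is a minimal sketch of the two inner doc comment styles in Rust. Note that the diff below uses pre-1.0 syntax (`uint`, `#[deriving(Hash)]`); this sketch uses modern syntax, and the `square` function is a hypothetical stand-in for the documented items.

```rust
//! Inner line doc comments (`//!`) document the enclosing module or crate.
//! This commit rewrites the older block form, `/*! ... */`, into this style;
//! rustdoc treats both forms identically, so only the source text changes.

/// An outer doc comment on an item (the block form would be `/** ... */`).
fn square(x: i32) -> i32 {
    x * x
}

fn main() {
    // The doc comment style has no effect on behavior.
    assert_eq!(square(4), 16);
}
```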
Parent: fac5a076
@@ -8,58 +8,56 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* Generic hashing support.
*
* This module provides a generic way to compute the hash of a value. The
* simplest way to make a type hashable is to use `#[deriving(Hash)]`:
*
* # Example
*
* ```rust
* use std::hash;
* use std::hash::Hash;
*
* #[deriving(Hash)]
* struct Person {
* id: uint,
* name: String,
* phone: u64,
* }
*
* let person1 = Person { id: 5, name: "Janet".to_string(), phone: 555_666_7777 };
* let person2 = Person { id: 5, name: "Bob".to_string(), phone: 555_666_7777 };
*
* assert!(hash::hash(&person1) != hash::hash(&person2));
* ```
*
* If you need more control over how a value is hashed, you need to implement
* the trait `Hash`:
*
* ```rust
* use std::hash;
* use std::hash::Hash;
* use std::hash::sip::SipState;
*
* struct Person {
* id: uint,
* name: String,
* phone: u64,
* }
*
* impl Hash for Person {
* fn hash(&self, state: &mut SipState) {
* self.id.hash(state);
* self.phone.hash(state);
* }
* }
*
* let person1 = Person { id: 5, name: "Janet".to_string(), phone: 555_666_7777 };
* let person2 = Person { id: 5, name: "Bob".to_string(), phone: 555_666_7777 };
*
* assert!(hash::hash(&person1) == hash::hash(&person2));
* ```
*/
//! Generic hashing support.
//!
//! This module provides a generic way to compute the hash of a value. The
//! simplest way to make a type hashable is to use `#[deriving(Hash)]`:
//!
//! # Example
//!
//! ```rust
//! use std::hash;
//! use std::hash::Hash;
//!
//! #[deriving(Hash)]
//! struct Person {
//! id: uint,
//! name: String,
//! phone: u64,
//! }
//!
//! let person1 = Person { id: 5, name: "Janet".to_string(), phone: 555_666_7777 };
//! let person2 = Person { id: 5, name: "Bob".to_string(), phone: 555_666_7777 };
//!
//! assert!(hash::hash(&person1) != hash::hash(&person2));
//! ```
//!
//! If you need more control over how a value is hashed, you need to implement
//! the trait `Hash`:
//!
//! ```rust
//! use std::hash;
//! use std::hash::Hash;
//! use std::hash::sip::SipState;
//!
//! struct Person {
//! id: uint,
//! name: String,
//! phone: u64,
//! }
//!
//! impl Hash for Person {
//! fn hash(&self, state: &mut SipState) {
//! self.id.hash(state);
//! self.phone.hash(state);
//! }
//! }
//!
//! let person1 = Person { id: 5, name: "Janet".to_string(), phone: 555_666_7777 };
//! let person2 = Person { id: 5, name: "Bob".to_string(), phone: 555_666_7777 };
//!
//! assert!(hash::hash(&person1) == hash::hash(&person2));
//! ```
#![allow(unused_must_use)]
@@ -8,18 +8,16 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*! The `Clone` trait for types that cannot be 'implicitly copied'
In Rust, some simple types are "implicitly copyable" and when you
assign them or pass them as arguments, the receiver will get a copy,
leaving the original value in place. These types do not require
allocation to copy and do not have finalizers (i.e. they do not
contain owned boxes or implement `Drop`), so the compiler considers
them cheap and safe to copy. For other types copies must be made
explicitly, by convention implementing the `Clone` trait and calling
the `clone` method.
*/
//! The `Clone` trait for types that cannot be 'implicitly copied'
//!
//! In Rust, some simple types are "implicitly copyable" and when you
//! assign them or pass them as arguments, the receiver will get a copy,
//! leaving the original value in place. These types do not require
//! allocation to copy and do not have finalizers (i.e. they do not
//! contain owned boxes or implement `Drop`), so the compiler considers
//! them cheap and safe to copy. For other types copies must be made
//! explicitly, by convention implementing the `Clone` trait and calling
//! the `clone` method.
#![unstable]
@@ -8,27 +8,25 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
The Finally trait provides a method, `finally` on
stack closures that emulates Java-style try/finally blocks.
Using the `finally` method is sometimes convenient, but the type rules
prohibit any shared, mutable state between the "try" case and the
"finally" case. For advanced cases, the `try_finally` function can
also be used. See that function for more details.
# Example
```
use std::finally::Finally;
(|| {
// ...
}).finally(|| {
// this code is always run
})
```
*/
//! The Finally trait provides a method, `finally` on
//! stack closures that emulates Java-style try/finally blocks.
//!
//! Using the `finally` method is sometimes convenient, but the type rules
//! prohibit any shared, mutable state between the "try" case and the
//! "finally" case. For advanced cases, the `try_finally` function can
//! also be used. See that function for more details.
//!
//! # Example
//!
//! ```
//! use std::finally::Finally;
//!
//! (|| {
//! // ...
//! }).finally(|| {
//! // this code is always run
//! })
//! ```
#![experimental]
@@ -8,38 +8,36 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*! rustc compiler intrinsics.
The corresponding definitions are in librustc/middle/trans/foreign.rs.
# Volatiles
The volatile intrinsics provide operations intended to act on I/O
memory, which are guaranteed to not be reordered by the compiler
across other volatile intrinsics. See the LLVM documentation on
[[volatile]].
[volatile]: http://llvm.org/docs/LangRef.html#volatile-memory-accesses
# Atomics
The atomic intrinsics provide common atomic operations on machine
words, with multiple possible memory orderings. They obey the same
semantics as C++11. See the LLVM documentation on [[atomics]].
[atomics]: http://llvm.org/docs/Atomics.html
A quick refresher on memory ordering:
* Acquire - a barrier for acquiring a lock. Subsequent reads and writes
take place after the barrier.
* Release - a barrier for releasing a lock. Preceding reads and writes
take place before the barrier.
* Sequentially consistent - sequentially consistent operations are
guaranteed to happen in order. This is the standard mode for working
with atomic types and is equivalent to Java's `volatile`.
*/
//! rustc compiler intrinsics.
//!
//! The corresponding definitions are in librustc/middle/trans/foreign.rs.
//!
//! # Volatiles
//!
//! The volatile intrinsics provide operations intended to act on I/O
//! memory, which are guaranteed to not be reordered by the compiler
//! across other volatile intrinsics. See the LLVM documentation on
//! [[volatile]].
//!
//! [volatile]: http://llvm.org/docs/LangRef.html#volatile-memory-accesses
//!
//! # Atomics
//!
//! The atomic intrinsics provide common atomic operations on machine
//! words, with multiple possible memory orderings. They obey the same
//! semantics as C++11. See the LLVM documentation on [[atomics]].
//!
//! [atomics]: http://llvm.org/docs/Atomics.html
//!
//! A quick refresher on memory ordering:
//!
//! * Acquire - a barrier for acquiring a lock. Subsequent reads and writes
//! take place after the barrier.
//! * Release - a barrier for releasing a lock. Preceding reads and writes
//! take place before the barrier.
//! * Sequentially consistent - sequentially consistent operations are
//! guaranteed to happen in order. This is the standard mode for working
//! with atomic types and is equivalent to Java's `volatile`.
#![experimental]
#![allow(missing_docs)]
@@ -8,55 +8,51 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Composable external iterators
# The `Iterator` trait
This module defines Rust's core iteration trait. The `Iterator` trait has one
unimplemented method, `next`. All other methods are derived through default
methods to perform operations such as `zip`, `chain`, `enumerate`, and `fold`.
The goal of this module is to unify iteration across all containers in Rust.
An iterator can be considered as a state machine which is used to track which
element will be yielded next.
There are various extensions also defined in this module to assist with various
types of iteration, such as the `DoubleEndedIterator` for iterating in reverse,
the `FromIterator` trait for creating a container from an iterator, and much
more.
## Rust's `for` loop
The special syntax used by rust's `for` loop is based around the `Iterator`
trait defined in this module. For loops can be viewed as a syntactical expansion
into a `loop`, for example, the `for` loop in this example is essentially
translated to the `loop` below.
```rust
let values = vec![1i, 2, 3];
// "Syntactical sugar" taking advantage of an iterator
for &x in values.iter() {
println!("{}", x);
}
// Rough translation of the iteration without a `for` iterator.
let mut it = values.iter();
loop {
match it.next() {
Some(&x) => {
println!("{}", x);
}
None => { break }
}
}
```
This `for` loop syntax can be applied to any iterator over any type.
*/
//! Composable external iterators
//!
//! # The `Iterator` trait
//!
//! This module defines Rust's core iteration trait. The `Iterator` trait has one
//! unimplemented method, `next`. All other methods are derived through default
//! methods to perform operations such as `zip`, `chain`, `enumerate`, and `fold`.
//!
//! The goal of this module is to unify iteration across all containers in Rust.
//! An iterator can be considered as a state machine which is used to track which
//! element will be yielded next.
//!
//! There are various extensions also defined in this module to assist with various
//! types of iteration, such as the `DoubleEndedIterator` for iterating in reverse,
//! the `FromIterator` trait for creating a container from an iterator, and much
//! more.
//!
//! ## Rust's `for` loop
//!
//! The special syntax used by rust's `for` loop is based around the `Iterator`
//! trait defined in this module. For loops can be viewed as a syntactical expansion
//! into a `loop`, for example, the `for` loop in this example is essentially
//! translated to the `loop` below.
//!
//! ```rust
//! let values = vec![1i, 2, 3];
//!
//! // "Syntactical sugar" taking advantage of an iterator
//! for &x in values.iter() {
//! println!("{}", x);
//! }
//!
//! // Rough translation of the iteration without a `for` iterator.
//! let mut it = values.iter();
//! loop {
//! match it.next() {
//! Some(&x) => {
//! println!("{}", x);
//! }
//! None => { break }
//! }
//! }
//! ```
//!
//! This `for` loop syntax can be applied to any iterator over any type.
pub use self::MinMaxResult::*;
@@ -8,17 +8,14 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Primitive traits representing basic 'kinds' of types
Rust types can be classified in various useful ways according to
intrinsic properties of the type. These classifications, often called
'kinds', are represented as traits.
They cannot be implemented by user code, but are instead implemented
by the compiler automatically for the types to which they apply.
*/
//! Primitive traits representing basic 'kinds' of types
//!
//! Rust types can be classified in various useful ways according to
//! intrinsic properties of the type. These classifications, often called
//! 'kinds', are represented as traits.
//!
//! They cannot be implemented by user code, but are instead implemented
//! by the compiler automatically for the types to which they apply.
/// Types able to be transferred across task boundaries.
#[lang="send"]
@@ -8,52 +8,48 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
*
* Overloadable operators
*
* Implementing these traits allows you to get an effect similar to
* overloading operators.
*
* The values for the right hand side of an operator are automatically
* borrowed, so `a + b` is sugar for `a.add(&b)`.
*
* All of these traits are imported by the prelude, so they are available in
* every Rust program.
*
* # Example
*
* This example creates a `Point` struct that implements `Add` and `Sub`, and then
* demonstrates adding and subtracting two `Point`s.
*
* ```rust
* #[deriving(Show)]
* struct Point {
* x: int,
* y: int
* }
*
* impl Add<Point, Point> for Point {
* fn add(&self, other: &Point) -> Point {
* Point {x: self.x + other.x, y: self.y + other.y}
* }
* }
*
* impl Sub<Point, Point> for Point {
* fn sub(&self, other: &Point) -> Point {
* Point {x: self.x - other.x, y: self.y - other.y}
* }
* }
* fn main() {
* println!("{}", Point {x: 1, y: 0} + Point {x: 2, y: 3});
* println!("{}", Point {x: 1, y: 0} - Point {x: 2, y: 3});
* }
* ```
*
* See the documentation for each trait for a minimum implementation that prints
* something to the screen.
*
*/
//! Overloadable operators
//!
//! Implementing these traits allows you to get an effect similar to
//! overloading operators.
//!
//! The values for the right hand side of an operator are automatically
//! borrowed, so `a + b` is sugar for `a.add(&b)`.
//!
//! All of these traits are imported by the prelude, so they are available in
//! every Rust program.
//!
//! # Example
//!
//! This example creates a `Point` struct that implements `Add` and `Sub`, and then
//! demonstrates adding and subtracting two `Point`s.
//!
//! ```rust
//! #[deriving(Show)]
//! struct Point {
//! x: int,
//! y: int
//! }
//!
//! impl Add<Point, Point> for Point {
//! fn add(&self, other: &Point) -> Point {
//! Point {x: self.x + other.x, y: self.y + other.y}
//! }
//! }
//!
//! impl Sub<Point, Point> for Point {
//! fn sub(&self, other: &Point) -> Point {
//! Point {x: self.x - other.x, y: self.y - other.y}
//! }
//! }
//! fn main() {
//! println!("{}", Point {x: 1, y: 0} + Point {x: 2, y: 3});
//! println!("{}", Point {x: 1, y: 0} - Point {x: 2, y: 3});
//! }
//! ```
//!
//! See the documentation for each trait for a minimum implementation that prints
//! something to the screen.
use kinds::Sized;
@@ -8,15 +8,11 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Simple [DEFLATE][def]-based compression. This is a wrapper around the
[`miniz`][mz] library, which is a one-file pure-C implementation of zlib.
[def]: https://en.wikipedia.org/wiki/DEFLATE
[mz]: https://code.google.com/p/miniz/
*/
//! Simple [DEFLATE][def]-based compression. This is a wrapper around the
//! [`miniz`][mz] library, which is a one-file pure-C implementation of zlib.
//!
//! [def]: https://en.wikipedia.org/wiki/DEFLATE
//! [mz]: https://code.google.com/p/miniz/
#![crate_name = "flate"]
#![experimental]
@@ -8,260 +8,258 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*! Generate files suitable for use with [Graphviz](http://www.graphviz.org/)
The `render` function generates output (e.g. an `output.dot` file) for
use with [Graphviz](http://www.graphviz.org/) by walking a labelled
graph. (Graphviz can then automatically lay out the nodes and edges
of the graph, and also optionally render the graph as an image or
other [output formats](
http://www.graphviz.org/content/output-formats), such as SVG.)
Rather than impose some particular graph data structure on clients,
this library exposes two traits that clients can implement on their
own structs before handing them over to the rendering function.
Note: This library does not yet provide access to the full
expressiveness of the [DOT language](
http://www.graphviz.org/doc/info/lang.html). For example, there are
many [attributes](http://www.graphviz.org/content/attrs) related to
providing layout hints (e.g. left-to-right versus top-down, which
algorithm to use, etc). The current intention of this library is to
emit a human-readable .dot file with very regular structure suitable
for easy post-processing.
# Examples
The first example uses a very simple graph representation: a list of
pairs of ints, representing the edges (the node set is implicit).
Each node label is derived directly from the int representing the node,
while the edge labels are all empty strings.
This example also illustrates how to use `CowVec` to return
an owned vector or a borrowed slice as appropriate: we construct the
node vector from scratch, but borrow the edge list (rather than
constructing a copy of all the edges from scratch).
The output from this example renders five nodes, with the first four
forming a diamond-shaped acyclic graph and then pointing to the fifth
which is cyclic.
```rust
use graphviz as dot;
type Nd = int;
type Ed = (int,int);
struct Edges(Vec<Ed>);
pub fn render_to<W:Writer>(output: &mut W) {
let edges = Edges(vec!((0,1), (0,2), (1,3), (2,3), (3,4), (4,4)));
dot::render(&edges, output).unwrap()
}
impl<'a> dot::Labeller<'a, Nd, Ed> for Edges {
fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example1").unwrap() }
fn node_id(&'a self, n: &Nd) -> dot::Id<'a> {
dot::Id::new(format!("N{}", *n)).unwrap()
}
}
impl<'a> dot::GraphWalk<'a, Nd, Ed> for Edges {
fn nodes(&self) -> dot::Nodes<'a,Nd> {
// (assumes that |N| \approxeq |E|)
let &Edges(ref v) = self;
let mut nodes = Vec::with_capacity(v.len());
for &(s,t) in v.iter() {
nodes.push(s); nodes.push(t);
}
nodes.sort();
nodes.dedup();
nodes.into_cow()
}
fn edges(&'a self) -> dot::Edges<'a,Ed> {
let &Edges(ref edges) = self;
edges.as_slice().into_cow()
}
fn source(&self, e: &Ed) -> Nd { let &(s,_) = e; s }
fn target(&self, e: &Ed) -> Nd { let &(_,t) = e; t }
}
# pub fn main() { render_to(&mut Vec::new()) }
```
```no_run
# pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
pub fn main() {
use std::io::File;
let mut f = File::create(&Path::new("example1.dot"));
render_to(&mut f)
}
```
Output from first example (in `example1.dot`):
```ignore
digraph example1 {
N0[label="N0"];
N1[label="N1"];
N2[label="N2"];
N3[label="N3"];
N4[label="N4"];
N0 -> N1[label=""];
N0 -> N2[label=""];
N1 -> N3[label=""];
N2 -> N3[label=""];
N3 -> N4[label=""];
N4 -> N4[label=""];
}
```
The second example illustrates using `node_label` and `edge_label` to
add labels to the nodes and edges in the rendered graph. The graph
here carries both `nodes` (the label text to use for rendering a
particular node), and `edges` (again a list of `(source,target)`
indices).
This example also illustrates how to use a type (in this case the edge
type) that shares substructure with the graph: the edge type here is a
direct reference to the `(source,target)` pair stored in the graph's
internal vector (rather than passing around a copy of the pair
itself). Note that this implies that `fn edges(&'a self)` must
construct a fresh `Vec<&'a (uint,uint)>` from the `Vec<(uint,uint)>`
edges stored in `self`.
Since both the set of nodes and the set of edges are always
constructed from scratch via iterators, we use the `collect()` method
from the `Iterator` trait to collect the nodes and edges into freshly
constructed growable `Vec` values (rather use the `into_cow`
from the `IntoCow` trait as was used in the first example
above).
The output from this example renders four nodes that make up the
Hasse-diagram for the subsets of the set `{x, y}`. Each edge is
labelled with the &sube; character (specified using the HTML character
entity `&sube`).
```rust
use graphviz as dot;
type Nd = uint;
type Ed<'a> = &'a (uint, uint);
struct Graph { nodes: Vec<&'static str>, edges: Vec<(uint,uint)> }
pub fn render_to<W:Writer>(output: &mut W) {
let nodes = vec!("{x,y}","{x}","{y}","{}");
let edges = vec!((0,1), (0,2), (1,3), (2,3));
let graph = Graph { nodes: nodes, edges: edges };
dot::render(&graph, output).unwrap()
}
impl<'a> dot::Labeller<'a, Nd, Ed<'a>> for Graph {
fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example2").unwrap() }
fn node_id(&'a self, n: &Nd) -> dot::Id<'a> {
dot::Id::new(format!("N{}", n)).unwrap()
}
fn node_label<'a>(&'a self, n: &Nd) -> dot::LabelText<'a> {
dot::LabelStr(self.nodes[*n].as_slice().into_cow())
}
fn edge_label<'a>(&'a self, _: &Ed) -> dot::LabelText<'a> {
dot::LabelStr("&sube;".into_cow())
}
}
impl<'a> dot::GraphWalk<'a, Nd, Ed<'a>> for Graph {
fn nodes(&self) -> dot::Nodes<'a,Nd> { range(0,self.nodes.len()).collect() }
fn edges(&'a self) -> dot::Edges<'a,Ed<'a>> { self.edges.iter().collect() }
fn source(&self, e: &Ed) -> Nd { let & &(s,_) = e; s }
fn target(&self, e: &Ed) -> Nd { let & &(_,t) = e; t }
}
# pub fn main() { render_to(&mut Vec::new()) }
```
```no_run
# pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
pub fn main() {
use std::io::File;
let mut f = File::create(&Path::new("example2.dot"));
render_to(&mut f)
}
```
The third example is similar to the second, except now each node and
edge now carries a reference to the string label for each node as well
as that node's index. (This is another illustration of how to share
structure with the graph itself, and why one might want to do so.)
The output from this example is the same as the second example: the
Hasse-diagram for the subsets of the set `{x, y}`.
```rust
use graphviz as dot;
type Nd<'a> = (uint, &'a str);
type Ed<'a> = (Nd<'a>, Nd<'a>);
struct Graph { nodes: Vec<&'static str>, edges: Vec<(uint,uint)> }
pub fn render_to<W:Writer>(output: &mut W) {
let nodes = vec!("{x,y}","{x}","{y}","{}");
let edges = vec!((0,1), (0,2), (1,3), (2,3));
let graph = Graph { nodes: nodes, edges: edges };
dot::render(&graph, output).unwrap()
}
impl<'a> dot::Labeller<'a, Nd<'a>, Ed<'a>> for Graph {
fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example3").unwrap() }
fn node_id(&'a self, n: &Nd<'a>) -> dot::Id<'a> {
dot::Id::new(format!("N{}", n.val0())).unwrap()
}
fn node_label<'a>(&'a self, n: &Nd<'a>) -> dot::LabelText<'a> {
let &(i, _) = n;
dot::LabelStr(self.nodes[i].as_slice().into_cow())
}
fn edge_label<'a>(&'a self, _: &Ed<'a>) -> dot::LabelText<'a> {
dot::LabelStr("&sube;".into_cow())
}
}
impl<'a> dot::GraphWalk<'a, Nd<'a>, Ed<'a>> for Graph {
fn nodes(&'a self) -> dot::Nodes<'a,Nd<'a>> {
self.nodes.iter().map(|s|s.as_slice()).enumerate().collect()
}
fn edges(&'a self) -> dot::Edges<'a,Ed<'a>> {
self.edges.iter()
.map(|&(i,j)|((i, self.nodes[i].as_slice()),
(j, self.nodes[j].as_slice())))
.collect()
}
fn source(&self, e: &Ed<'a>) -> Nd<'a> { let &(s,_) = e; s }
fn target(&self, e: &Ed<'a>) -> Nd<'a> { let &(_,t) = e; t }
}
# pub fn main() { render_to(&mut Vec::new()) }
```
```no_run
# pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
pub fn main() {
use std::io::File;
let mut f = File::create(&Path::new("example3.dot"));
render_to(&mut f)
}
```
# References
* [Graphviz](http://www.graphviz.org/)
* [DOT language](http://www.graphviz.org/doc/info/lang.html)
*/
//! Generate files suitable for use with [Graphviz](http://www.graphviz.org/)
//!
//! The `render` function generates output (e.g. an `output.dot` file) for
//! use with [Graphviz](http://www.graphviz.org/) by walking a labelled
//! graph. (Graphviz can then automatically lay out the nodes and edges
//! of the graph, and also optionally render the graph as an image or
//! other [output formats](
//! http://www.graphviz.org/content/output-formats), such as SVG.)
//!
//! Rather than impose some particular graph data structure on clients,
//! this library exposes two traits that clients can implement on their
//! own structs before handing them over to the rendering function.
//!
//! Note: This library does not yet provide access to the full
//! expressiveness of the [DOT language](
//! http://www.graphviz.org/doc/info/lang.html). For example, there are
//! many [attributes](http://www.graphviz.org/content/attrs) related to
//! providing layout hints (e.g. left-to-right versus top-down, which
//! algorithm to use, etc). The current intention of this library is to
//! emit a human-readable .dot file with very regular structure suitable
//! for easy post-processing.
//!
//! # Examples
//!
//! The first example uses a very simple graph representation: a list of
//! pairs of ints, representing the edges (the node set is implicit).
//! Each node label is derived directly from the int representing the node,
//! while the edge labels are all empty strings.
//!
//! This example also illustrates how to use `CowVec` to return
//! an owned vector or a borrowed slice as appropriate: we construct the
//! node vector from scratch, but borrow the edge list (rather than
//! constructing a copy of all the edges from scratch).
//!
//! The output from this example renders five nodes, with the first four
//! forming a diamond-shaped acyclic graph and then pointing to the fifth
//! which is cyclic.
//!
//! ```rust
//! use graphviz as dot;
//!
//! type Nd = int;
//! type Ed = (int,int);
//! struct Edges(Vec<Ed>);
//!
//! pub fn render_to<W:Writer>(output: &mut W) {
//! let edges = Edges(vec!((0,1), (0,2), (1,3), (2,3), (3,4), (4,4)));
//! dot::render(&edges, output).unwrap()
//! }
//!
//! impl<'a> dot::Labeller<'a, Nd, Ed> for Edges {
//! fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example1").unwrap() }
//!
//! fn node_id(&'a self, n: &Nd) -> dot::Id<'a> {
//! dot::Id::new(format!("N{}", *n)).unwrap()
//! }
//! }
//!
//! impl<'a> dot::GraphWalk<'a, Nd, Ed> for Edges {
//! fn nodes(&self) -> dot::Nodes<'a,Nd> {
//! // (assumes that |N| \approxeq |E|)
//! let &Edges(ref v) = self;
//! let mut nodes = Vec::with_capacity(v.len());
//! for &(s,t) in v.iter() {
//! nodes.push(s); nodes.push(t);
//! }
//! nodes.sort();
//! nodes.dedup();
//! nodes.into_cow()
//! }
//!
//! fn edges(&'a self) -> dot::Edges<'a,Ed> {
//! let &Edges(ref edges) = self;
//! edges.as_slice().into_cow()
//! }
//!
//! fn source(&self, e: &Ed) -> Nd { let &(s,_) = e; s }
//!
//! fn target(&self, e: &Ed) -> Nd { let &(_,t) = e; t }
//! }
//!
//! # pub fn main() { render_to(&mut Vec::new()) }
//! ```
//!
//! ```no_run
//! # pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
//! pub fn main() {
//! use std::io::File;
//! let mut f = File::create(&Path::new("example1.dot"));
//! render_to(&mut f)
//! }
//! ```
//!
//! Output from first example (in `example1.dot`):
//!
//! ```ignore
//! digraph example1 {
//! N0[label="N0"];
//! N1[label="N1"];
//! N2[label="N2"];
//! N3[label="N3"];
//! N4[label="N4"];
//! N0 -> N1[label=""];
//! N0 -> N2[label=""];
//! N1 -> N3[label=""];
//! N2 -> N3[label=""];
//! N3 -> N4[label=""];
//! N4 -> N4[label=""];
//! }
//! ```
//!
//! The second example illustrates using `node_label` and `edge_label` to
//! add labels to the nodes and edges in the rendered graph. The graph
//! here carries both `nodes` (the label text to use for rendering a
//! particular node), and `edges` (again a list of `(source,target)`
//! indices).
//!
//! This example also illustrates how to use a type (in this case the edge
//! type) that shares substructure with the graph: the edge type here is a
//! direct reference to the `(source,target)` pair stored in the graph's
//! internal vector (rather than passing around a copy of the pair
//! itself). Note that this implies that `fn edges(&'a self)` must
//! construct a fresh `Vec<&'a (uint,uint)>` from the `Vec<(uint,uint)>`
//! edges stored in `self`.
//!
//! Since both the set of nodes and the set of edges are always
//! constructed from scratch via iterators, we use the `collect()` method
//! from the `Iterator` trait to collect the nodes and edges into freshly
//! constructed growable `Vec` values (rather use the `into_cow`
//! from the `IntoCow` trait as was used in the first example
//! above).
//!
//! The output from this example renders four nodes that make up the
//! Hasse-diagram for the subsets of the set `{x, y}`. Each edge is
//! labelled with the &sube; character (specified using the HTML character
//! entity `&sube`).
//!
//! ```rust
//! use graphviz as dot;
//!
//! type Nd = uint;
//! type Ed<'a> = &'a (uint, uint);
//! struct Graph { nodes: Vec<&'static str>, edges: Vec<(uint,uint)> }
//!
//! pub fn render_to<W:Writer>(output: &mut W) {
//! let nodes = vec!("{x,y}","{x}","{y}","{}");
//! let edges = vec!((0,1), (0,2), (1,3), (2,3));
//! let graph = Graph { nodes: nodes, edges: edges };
//!
//! dot::render(&graph, output).unwrap()
//! }
//!
//! impl<'a> dot::Labeller<'a, Nd, Ed<'a>> for Graph {
//! fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example2").unwrap() }
//! fn node_id(&'a self, n: &Nd) -> dot::Id<'a> {
//! dot::Id::new(format!("N{}", n)).unwrap()
//! }
//! fn node_label<'a>(&'a self, n: &Nd) -> dot::LabelText<'a> {
//! dot::LabelStr(self.nodes[*n].as_slice().into_cow())
//! }
//! fn edge_label<'a>(&'a self, _: &Ed) -> dot::LabelText<'a> {
//! dot::LabelStr("&sube;".into_cow())
//! }
//! }
//!
//! impl<'a> dot::GraphWalk<'a, Nd, Ed<'a>> for Graph {
//! fn nodes(&self) -> dot::Nodes<'a,Nd> { range(0,self.nodes.len()).collect() }
//! fn edges(&'a self) -> dot::Edges<'a,Ed<'a>> { self.edges.iter().collect() }
//! fn source(&self, e: &Ed) -> Nd { let & &(s,_) = e; s }
//! fn target(&self, e: &Ed) -> Nd { let & &(_,t) = e; t }
//! }
//!
//! # pub fn main() { render_to(&mut Vec::new()) }
//! ```
//!
//! ```no_run
//! # pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
//! pub fn main() {
//! use std::io::File;
//! let mut f = File::create(&Path::new("example2.dot"));
//! render_to(&mut f)
//! }
//! ```
//!
//! The third example is similar to the second, except now each node and
//! edge now carries a reference to the string label for each node as well
//! as that node's index. (This is another illustration of how to share
//! structure with the graph itself, and why one might want to do so.)
//!
//! The output from this example is the same as the second example: the
//! Hasse-diagram for the subsets of the set `{x, y}`.
//!
//! ```rust
//! use graphviz as dot;
//!
//! type Nd<'a> = (uint, &'a str);
//! type Ed<'a> = (Nd<'a>, Nd<'a>);
//! struct Graph { nodes: Vec<&'static str>, edges: Vec<(uint,uint)> }
//!
//! pub fn render_to<W:Writer>(output: &mut W) {
//! let nodes = vec!("{x,y}","{x}","{y}","{}");
//! let edges = vec!((0,1), (0,2), (1,3), (2,3));
//! let graph = Graph { nodes: nodes, edges: edges };
//!
//! dot::render(&graph, output).unwrap()
//! }
//!
//! impl<'a> dot::Labeller<'a, Nd<'a>, Ed<'a>> for Graph {
//! fn graph_id(&'a self) -> dot::Id<'a> { dot::Id::new("example3").unwrap() }
//! fn node_id(&'a self, n: &Nd<'a>) -> dot::Id<'a> {
//! dot::Id::new(format!("N{}", n.val0())).unwrap()
//! }
//! fn node_label<'a>(&'a self, n: &Nd<'a>) -> dot::LabelText<'a> {
//! let &(i, _) = n;
//! dot::LabelStr(self.nodes[i].as_slice().into_cow())
//! }
//! fn edge_label<'a>(&'a self, _: &Ed<'a>) -> dot::LabelText<'a> {
//! dot::LabelStr("&sube;".into_cow())
//! }
//! }
//!
//! impl<'a> dot::GraphWalk<'a, Nd<'a>, Ed<'a>> for Graph {
//! fn nodes(&'a self) -> dot::Nodes<'a,Nd<'a>> {
//! self.nodes.iter().map(|s|s.as_slice()).enumerate().collect()
//! }
//! fn edges(&'a self) -> dot::Edges<'a,Ed<'a>> {
//! self.edges.iter()
//! .map(|&(i,j)|((i, self.nodes[i].as_slice()),
//! (j, self.nodes[j].as_slice())))
//! .collect()
//! }
//! fn source(&self, e: &Ed<'a>) -> Nd<'a> { let &(s,_) = e; s }
//! fn target(&self, e: &Ed<'a>) -> Nd<'a> { let &(_,t) = e; t }
//! }
//!
//! # pub fn main() { render_to(&mut Vec::new()) }
//! ```
//!
//! ```no_run
//! # pub fn render_to<W:Writer>(output: &mut W) { unimplemented!() }
//! pub fn main() {
//! use std::io::File;
//! let mut f = File::create(&Path::new("example3.dot"));
//! render_to(&mut f)
//! }
//! ```
//!
//! # References
//!
//! * [Graphviz](http://www.graphviz.org/)
//!
//! * [DOT language](http://www.graphviz.org/doc/info/lang.html)
#![crate_name = "graphviz"]
#![experimental]
@@ -19,59 +19,57 @@
html_root_url = "http://doc.rust-lang.org/nightly/",
html_playground_url = "http://play.rust-lang.org/")]
/*!
* Bindings for the C standard library and other platform libraries
*
* **NOTE:** These are *architecture and libc* specific. On Linux, these
* bindings are only correct for glibc.
*
* This module contains bindings to the C standard library, organized into
* modules by their defining standard. Additionally, it contains some assorted
* platform-specific definitions. For convenience, most functions and types
* are reexported, so `use libc::*` will import the available C bindings as
* appropriate for the target platform. The exact set of functions available
* are platform specific.
*
* *Note:* Because these definitions are platform-specific, some may not appear
* in the generated documentation.
*
* We consider the following specs reasonably normative with respect to
* interoperating with the C standard library (libc/msvcrt):
*
* * ISO 9899:1990 ('C95', 'ANSI C', 'Standard C'), NA1, 1995.
* * ISO 9899:1999 ('C99' or 'C9x').
* * ISO 9945:1988 / IEEE 1003.1-1988 ('POSIX.1').
* * ISO 9945:2001 / IEEE 1003.1-2001 ('POSIX:2001', 'SUSv3').
* * ISO 9945:2008 / IEEE 1003.1-2008 ('POSIX:2008', 'SUSv4').
*
* Note that any reference to the 1996 revision of POSIX, or any revs between
* 1990 (when '88 was approved at ISO) and 2001 (when the next actual
* revision-revision happened), are merely additions of other chapters (1b and
* 1c) outside the core interfaces.
*
* Despite having several names each, these are *reasonably* coherent
* point-in-time, list-of-definition sorts of specs. You can get each under a
* variety of names but will wind up with the same definition in each case.
*
* See standards(7) in linux-manpages for more details.
*
* Our interface to these libraries is complicated by the non-universality of
* conformance to any of them. About the only thing universally supported is
* the first (C95), beyond that definitions quickly become absent on various
* platforms.
*
* We therefore wind up dividing our module-space up (mostly for the sake of
* sanity while editing, filling-in-details and eliminating duplication) into
* definitions common-to-all (held in modules named c95, c99, posix88, posix01
* and posix08) and definitions that appear only on *some* platforms (named
* 'extra'). This would be things like significant OSX foundation kit, or Windows
* library kernel32.dll, or various fancy glibc, Linux or BSD extensions.
*
* In addition to the per-platform 'extra' modules, we define a module of
* 'common BSD' libc routines that never quite made it into POSIX but show up
* in multiple derived systems. This is the 4.4BSD r2 / 1995 release, the final
* one from Berkeley after the lawsuits died down and the CSRG dissolved.
*/
//! Bindings for the C standard library and other platform libraries
//!
//! **NOTE:** These are *architecture and libc* specific. On Linux, these
//! bindings are only correct for glibc.
//!
//! This module contains bindings to the C standard library, organized into
//! modules by their defining standard. Additionally, it contains some assorted
//! platform-specific definitions. For convenience, most functions and types
//! are reexported, so `use libc::*` will import the available C bindings as
//! appropriate for the target platform. The exact set of functions available
//! is platform-specific.
//!
//! *Note:* Because these definitions are platform-specific, some may not appear
//! in the generated documentation.
//!
//! We consider the following specs reasonably normative with respect to
//! interoperating with the C standard library (libc/msvcrt):
//!
//! * ISO 9899:1990 ('C95', 'ANSI C', 'Standard C'), NA1, 1995.
//! * ISO 9899:1999 ('C99' or 'C9x').
//! * ISO 9945:1988 / IEEE 1003.1-1988 ('POSIX.1').
//! * ISO 9945:2001 / IEEE 1003.1-2001 ('POSIX:2001', 'SUSv3').
//! * ISO 9945:2008 / IEEE 1003.1-2008 ('POSIX:2008', 'SUSv4').
//!
//! Note that any references to the 1996 revision of POSIX, or to any revs
//! between 1990 (when '88 was approved at ISO) and 2001 (when the next
//! actual revision happened), are merely additions of other chapters (1b and
//! 1c) outside the core interfaces.
//!
//! Despite having several names each, these are *reasonably* coherent
//! point-in-time, list-of-definition sorts of specs. You can get each under a
//! variety of names but will wind up with the same definition in each case.
//!
//! See standards(7) in linux-manpages for more details.
//!
//! Our interface to these libraries is complicated by the non-universality of
//! conformance to any of them. About the only thing universally supported is
//! the first (C95); beyond that, definitions quickly become absent on various
//! platforms.
//!
//! We therefore wind up dividing our module-space up (mostly for the sake of
//! sanity while editing, filling-in-details and eliminating duplication) into
//! definitions common-to-all (held in modules named c95, c99, posix88, posix01
//! and posix08) and definitions that appear only on *some* platforms (named
//! 'extra'). This would be things like significant OSX foundation kit, or Windows
//! library kernel32.dll, or various fancy glibc, Linux or BSD extensions.
//!
//! In addition to the per-platform 'extra' modules, we define a module of
//! 'common BSD' libc routines that never quite made it into POSIX but show up
//! in multiple derived systems. This is the 4.4BSD r2 / 1995 release, the final
//! one from Berkeley after the lawsuits died down and the CSRG dissolved.
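The kind of binding this crate collects can be sketched in a few lines. This is a standalone illustration, not an excerpt from the module, and it assumes a Unix-like target where `getpid` is provided by the platform libc:

```rust
// A minimal sketch of a C-library binding: declare the foreign symbol,
// then call it through an unsafe block (the compiler cannot verify the
// C side of the signature).
extern "C" {
    fn getpid() -> i32;
}

fn main() {
    let pid = unsafe { getpid() };
    println!("current pid: {}", pid);
    assert!(pid > 0); // every process has a positive pid on Unix
}
```

The real module repeats this pattern for hundreds of symbols, grouped into the per-standard submodules (`c95`, `posix88`, and so on) described above.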
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
......
......@@ -8,17 +8,14 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Sampling from random distributions.
This is a generalization of `Rand` to allow parameters to control the
exact properties of the generated values, e.g. the mean and standard
deviation of a normal distribution. The `Sample` trait is the most
general, and allows for generating values that change some state
internally. The `IndependentSample` trait is for generating values
that do not need to record state.
*/
//! Sampling from random distributions.
//!
//! This is a generalization of `Rand` to allow parameters to control the
//! exact properties of the generated values, e.g. the mean and standard
//! deviation of a normal distribution. The `Sample` trait is the most
//! general, and allows for generating values that change some state
//! internally. The `IndependentSample` trait is for generating values
//! that do not need to record state.
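The `Sample`/`IndependentSample` split described above can be sketched with a toy distribution. The trait names mirror the module's API shape, but the code below is a self-contained illustration: the `Lcg` generator and `UniformInt` type are hypothetical stand-ins, not the real library types.

```rust
// A toy RNG so the sketch is self-contained and deterministic.
struct Lcg(u64);
impl Lcg {
    fn next_u64(&mut self) -> u64 {
        self.0 = self.0.wrapping_mul(6364136223846793005).wrapping_add(1);
        self.0
    }
}

// Stateful sampling: the distribution may update internal state per sample.
trait Sample<T> {
    fn sample(&mut self, rng: &mut Lcg) -> T;
}

// Stateless sampling: no internal state needs to be recorded.
trait IndependentSample<T>: Sample<T> {
    fn ind_sample(&self, rng: &mut Lcg) -> T;
}

// Parameters (low/high) control the generated values -- the generalization
// of `Rand` that the module doc describes. (Modulo bias is ignored here.)
struct UniformInt { low: u64, high: u64 }

impl Sample<u64> for UniformInt {
    fn sample(&mut self, rng: &mut Lcg) -> u64 { self.ind_sample(rng) }
}
impl IndependentSample<u64> for UniformInt {
    fn ind_sample(&self, rng: &mut Lcg) -> u64 {
        self.low + rng.next_u64() % (self.high - self.low)
    }
}

fn main() {
    let dist = UniformInt { low: 10, high: 20 };
    let mut rng = Lcg(42);
    for _ in 0..100 {
        let v = dist.ind_sample(&mut rng);
        assert!(v >= 10 && v < 20);
    }
}
```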
#![experimental]
......
......@@ -8,15 +8,11 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
The Rust compiler.
# Note
This API is completely unstable and subject to change.
*/
//! The Rust compiler.
//!
//! # Note
//!
//! This API is completely unstable and subject to change.
#![crate_name = "rustc"]
#![experimental]
......
......@@ -196,53 +196,38 @@ fn reserve_id_range(sess: &Session,
}
impl<'a, 'b, 'tcx> DecodeContext<'a, 'b, 'tcx> {
/// Translates an internal id, meaning a node id that is known to refer to some part of the
/// item currently being inlined, such as a local variable or argument. All naked node-ids
/// that appear in types have this property, since if something might refer to an external item
/// we would use a def-id to allow for the possibility that the item resides in another crate.
pub fn tr_id(&self, id: ast::NodeId) -> ast::NodeId {
/*!
* Translates an internal id, meaning a node id that is known
* to refer to some part of the item currently being inlined,
* such as a local variable or argument. All naked node-ids
* that appear in types have this property, since if something
* might refer to an external item we would use a def-id to
* allow for the possibility that the item resides in another
* crate.
*/
// from_id_range should be non-empty
assert!(!self.from_id_range.empty());
(id - self.from_id_range.min + self.to_id_range.min)
}
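The translation performed by `tr_id` is a plain range shift: an id keeps its offset from the start of the source range when mapped into the destination range. A standalone sketch, with illustrative names rather than the compiler's types:

```rust
// Map an id at some offset into [from_min, ..) to the same offset
// into [to_min, ..). This mirrors the arithmetic in `tr_id`.
fn translate_id(id: u32, from_min: u32, to_min: u32) -> u32 {
    id - from_min + to_min
}

fn main() {
    // An id 2 past the start of the source range stays 2 past the
    // start of the destination range.
    assert_eq!(translate_id(7, 5, 100), 102);
    // The start of one range maps to the start of the other.
    assert_eq!(translate_id(5, 5, 100), 100);
}
```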
/// Translates an EXTERNAL def-id, converting the crate number from the one used in the encoded
/// data to the current crate number. By external, I mean that it should be translated to a
/// reference to the item in its original crate, as opposed to being translated to a reference
/// to the inlined version of the item. This is typically, but not always, what you want,
/// because most def-ids refer to external things like types or other fns that may or may not
/// be inlined. Note that even when the inlined function is referencing itself recursively, we
/// would want `tr_def_id` for that reference: conceptually, the function calls the original,
/// non-inlined version, and trans deals with linking that recursive call to the inlined copy.
///
/// However, there are a *few* cases where def-ids are used but we know that the thing being
/// referenced is in fact *internal* to the item being inlined. In those cases, you should use
/// `tr_intern_def_id()` below.
pub fn tr_def_id(&self, did: ast::DefId) -> ast::DefId {
/*!
* Translates an EXTERNAL def-id, converting the crate number
* from the one used in the encoded data to the current crate
* numbers.. By external, I mean that it be translated to a
* reference to the item in its original crate, as opposed to
* being translated to a reference to the inlined version of
* the item. This is typically, but not always, what you
* want, because most def-ids refer to external things like
* types or other fns that may or may not be inlined. Note
* that even when the inlined function is referencing itself
* recursively, we would want `tr_def_id` for that
* reference--- conceptually the function calls the original,
* non-inlined version, and trans deals with linking that
* recursive call to the inlined copy.
*
* However, there are a *few* cases where def-ids are used but
* we know that the thing being referenced is in fact *internal*
* to the item being inlined. In those cases, you should use
* `tr_intern_def_id()` below.
*/
decoder::translate_def_id(self.cdata, did)
}
pub fn tr_intern_def_id(&self, did: ast::DefId) -> ast::DefId {
/*!
* Translates an INTERNAL def-id, meaning a def-id that is
* known to refer to some part of the item currently being
* inlined. In that case, we want to convert the def-id to
* refer to the current crate and to the new, inlined node-id.
*/
/// Translates an INTERNAL def-id, meaning a def-id that is
/// known to refer to some part of the item currently being
/// inlined. In that case, we want to convert the def-id to
/// refer to the current crate and to the new, inlined node-id.
pub fn tr_intern_def_id(&self, did: ast::DefId) -> ast::DefId {
assert_eq!(did.krate, ast::LOCAL_CRATE);
ast::DefId { krate: ast::LOCAL_CRATE, node: self.tr_id(did.node) }
}
......@@ -1780,43 +1765,40 @@ fn read_unboxed_closure<'a, 'b>(&mut self, dcx: &DecodeContext<'a, 'b, 'tcx>)
}
}
/// Converts a def-id that appears in a type. The correct
/// translation will depend on what kind of def-id this is.
/// This is a subtle point: type definitions are not
/// inlined into the current crate, so if the def-id names
/// a nominal type or type alias, then it should be
/// translated to refer to the source crate.
///
/// However, *type parameters* are cloned along with the function
/// they are attached to. So we should translate those def-ids
/// to refer to the new, cloned copy of the type parameter.
/// We only see references to free type parameters in the body of
/// an inlined function. In such cases, we need the def-id to
/// be a local id so that the TypeContents code is able to lookup
/// the relevant info in the ty_param_defs table.
///
/// *Region parameters*, unfortunately, are another kettle of fish.
/// In such cases, def_id's can appear in types to distinguish
/// shadowed bound regions and so forth. It doesn't actually
/// matter so much what we do to these, since regions are erased
/// at trans time, but it's good to keep them consistent just in
/// case. We translate them with `tr_def_id()` which will map
/// the crate numbers back to the original source crate.
///
/// Unboxed closures are cloned along with the function being
/// inlined, and all side tables use interned node IDs, so we
/// translate their def IDs accordingly.
///
/// It'd be really nice to refactor the type repr to not include
/// def-ids so that all these distinctions were unnecessary.
fn convert_def_id(&mut self,
dcx: &DecodeContext,
source: tydecode::DefIdSource,
did: ast::DefId)
-> ast::DefId {
/*!
* Converts a def-id that appears in a type. The correct
* translation will depend on what kind of def-id this is.
* This is a subtle point: type definitions are not
* inlined into the current crate, so if the def-id names
* a nominal type or type alias, then it should be
* translated to refer to the source crate.
*
* However, *type parameters* are cloned along with the function
* they are attached to. So we should translate those def-ids
* to refer to the new, cloned copy of the type parameter.
* We only see references to free type parameters in the body of
* an inlined function. In such cases, we need the def-id to
* be a local id so that the TypeContents code is able to lookup
* the relevant info in the ty_param_defs table.
*
* *Region parameters*, unfortunately, are another kettle of fish.
* In such cases, def_id's can appear in types to distinguish
* shadowed bound regions and so forth. It doesn't actually
* matter so much what we do to these, since regions are erased
* at trans time, but it's good to keep them consistent just in
* case. We translate them with `tr_def_id()` which will map
* the crate numbers back to the original source crate.
*
* Unboxed closures are cloned along with the function being
* inlined, and all side tables use interned node IDs, so we
* translate their def IDs accordingly.
*
* It'd be really nice to refactor the type repr to not include
* def-ids so that all these distinctions were unnecessary.
*/
let r = match source {
NominalType | TypeWithId | RegionParameter => dcx.tr_def_id(did),
TypeParameter | UnboxedClosureSource => dcx.tr_intern_def_id(did)
......
......@@ -684,16 +684,13 @@ pub fn analyze_restrictions_on_use(&self,
return ret;
}
/// Reports an error if `expr` (which should be a path)
/// is using a moved/uninitialized value.
fn check_if_path_is_moved(&self,
id: ast::NodeId,
span: Span,
use_kind: MovedValueUseKind,
lp: &Rc<LoanPath<'tcx>>) {
/*!
* Reports an error if `expr` (which should be a path)
* is using a moved/uninitialized value
*/
debug!("check_if_path_is_moved(id={}, use_kind={}, lp={})",
id, use_kind, lp.repr(self.bccx.tcx));
let base_lp = owned_ptr_base_path_rc(lp);
......@@ -708,30 +705,29 @@ fn check_if_path_is_moved(&self,
});
}
/// Reports an error if assigning to `lp` will use a
/// moved/uninitialized value. Mainly this is concerned with
/// detecting derefs of uninitialized pointers.
///
/// For example:
///
/// ```
/// let a: int;
/// a = 10; // ok, even though a is uninitialized
///
/// struct Point { x: uint, y: uint }
/// let p: Point;
/// p.x = 22; // ok, even though `p` is uninitialized
///
/// let p: ~Point;
/// (*p).x = 22; // not ok, p is uninitialized, can't deref
/// ```
fn check_if_assigned_path_is_moved(&self,
id: ast::NodeId,
span: Span,
use_kind: MovedValueUseKind,
lp: &Rc<LoanPath<'tcx>>)
{
/*!
* Reports an error if assigning to `lp` will use a
* moved/uninitialized value. Mainly this is concerned with
* detecting derefs of uninitialized pointers.
*
* For example:
*
* let a: int;
* a = 10; // ok, even though a is uninitialized
*
* struct Point { x: uint, y: uint }
* let p: Point;
* p.x = 22; // ok, even though `p` is uninitialized
*
* let p: ~Point;
* (*p).x = 22; // not ok, p is uninitialized, can't deref
*/
match lp.kind {
LpVar(_) | LpUpvar(_) => {
// assigning to `x` does not require that `x` is initialized
......
This diff has been collapsed.
......@@ -8,13 +8,10 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
//! Helper routines used for fragmenting structural paths due to moves for
//! tracking drop obligations. Please see the extensive comments in the
//! section "Structural fragments" in `doc.rs`.
Helper routines used for fragmenting structural paths due to moves for
tracking drop obligations. Please see the extensive comments in the
section "Structural fragments" in `doc.rs`.
*/
use self::Fragment::*;
use session::config;
......@@ -176,16 +173,12 @@ pub fn instrument_move_fragments<'tcx>(this: &MoveData<'tcx>,
instrument_all_paths("assigned_leaf_path", &fragments.assigned_leaf_paths);
}
/// Normalizes the fragment sets in `this`; i.e., removes duplicate entries, constructs the set of
/// parents, and constructs the left-over fragments.
///
/// Note: "left-over fragments" means paths that were not directly referenced in moves nor
/// assignments, but must nonetheless be tracked as potential drop obligations.
pub fn fixup_fragment_sets<'tcx>(this: &MoveData<'tcx>, tcx: &ty::ctxt<'tcx>) {
/*!
* Normalizes the fragment sets in `this`; i.e., removes
* duplicate entries, constructs the set of parents, and
* constructs the left-over fragments.
*
* Note: "left-over fragments" means paths that were not
* directly referenced in moves nor assignments, but must
* nonetheless be tracked as potential drop obligations.
*/
let mut fragments = this.fragments.borrow_mut();
......@@ -283,18 +276,14 @@ fn non_member(elem: MovePathIndex, set: &[MovePathIndex]) -> bool {
}
}
/// Adds all of the precisely-tracked siblings of `lp` as potential move paths of interest. For
/// example, if `lp` represents `s.x.j`, then adds move paths for `s.x.i` and
/// siblings of `s.x.j`.
fn add_fragment_siblings<'tcx>(this: &MoveData<'tcx>,
tcx: &ty::ctxt<'tcx>,
gathered_fragments: &mut Vec<Fragment>,
lp: Rc<LoanPath<'tcx>>,
origin_id: Option<ast::NodeId>) {
/*!
* Adds all of the precisely-tracked siblings of `lp` as
* potential move paths of interest. For example, if `lp`
* represents `s.x.j`, then adds moves paths for `s.x.i` and
* `s.x.k`, the siblings of `s.x.j`.
*/
match lp.kind {
LpVar(_) | LpUpvar(..) => {} // Local variables have no siblings.
......@@ -343,6 +332,8 @@ fn add_fragment_siblings<'tcx>(this: &MoveData<'tcx>,
}
}
/// We have determined that `origin_lp` destructures to LpExtend(parent, original_field_name).
/// Based on this, add move paths for all of the siblings of `origin_lp`.
fn add_fragment_siblings_for_extension<'tcx>(this: &MoveData<'tcx>,
tcx: &ty::ctxt<'tcx>,
gathered_fragments: &mut Vec<Fragment>,
......@@ -353,12 +344,6 @@ fn add_fragment_siblings_for_extension<'tcx>(this: &MoveData<'tcx>,
origin_id: Option<ast::NodeId>,
enum_variant_info: Option<(ast::DefId,
Rc<LoanPath<'tcx>>)>) {
/*!
* We have determined that `origin_lp` destructures to
* LpExtend(parent, original_field_name). Based on this,
* add move paths for all of the siblings of `origin_lp`.
*/
let parent_ty = parent_lp.to_type();
let add_fragment_sibling_local = |field_name| {
......@@ -454,6 +439,8 @@ fn add_fragment_siblings_for_extension<'tcx>(this: &MoveData<'tcx>,
}
}
/// Adds the single sibling `LpExtend(parent, new_field_name)` of `origin_lp` (the original
/// loan-path).
fn add_fragment_sibling_core<'tcx>(this: &MoveData<'tcx>,
tcx: &ty::ctxt<'tcx>,
gathered_fragments: &mut Vec<Fragment>,
......@@ -461,10 +448,6 @@ fn add_fragment_sibling_core<'tcx>(this: &MoveData<'tcx>,
mc: mc::MutabilityCategory,
new_field_name: mc::FieldName,
origin_lp: &Rc<LoanPath<'tcx>>) -> MovePathIndex {
/*!
* Adds the single sibling `LpExtend(parent, new_field_name)`
* of `origin_lp` (the original loan-path).
*/
let opt_variant_did = match parent.kind {
LpDowncast(_, variant_did) => Some(variant_did),
LpVar(..) | LpUpvar(..) | LpExtend(..) => None,
......
......@@ -8,9 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* Computes moves.
*/
//! Computes moves.
use middle::borrowck::*;
use middle::borrowck::LoanPathKind::*;
......
......@@ -8,10 +8,8 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* This module implements the check that the lifetime of a borrow
* does not exceed the lifetime of the value being borrowed.
*/
//! This module implements the check that the lifetime of a borrow
//! does not exceed the lifetime of the value being borrowed.
use middle::borrowck::*;
use middle::expr_use_visitor as euv;
......
......@@ -225,6 +225,9 @@ fn check_aliasability<'a, 'tcx>(bccx: &BorrowckCtxt<'a, 'tcx>,
impl<'a, 'tcx> GatherLoanCtxt<'a, 'tcx> {
pub fn tcx(&self) -> &'a ty::ctxt<'tcx> { self.bccx.tcx }
/// Guarantees that `addr_of(cmt)` will be valid for the duration of `static_scope_r`, or
/// reports an error. This may entail taking out loans, which will be added to the
/// `req_loan_map`.
fn guarantee_valid(&mut self,
borrow_id: ast::NodeId,
borrow_span: Span,
......@@ -232,12 +235,6 @@ fn guarantee_valid(&mut self,
req_kind: ty::BorrowKind,
loan_region: ty::Region,
cause: euv::LoanCause) {
/*!
* Guarantees that `addr_of(cmt)` will be valid for the duration of
* `static_scope_r`, or reports an error. This may entail taking
* out loans, which will be added to the `req_loan_map`.
*/
debug!("guarantee_valid(borrow_id={}, cmt={}, \
req_mutbl={}, loan_region={})",
borrow_id,
......
......@@ -8,9 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* Computes the restrictions that result from a borrow.
*/
//! Computes the restrictions that result from a borrow.
pub use self::RestrictionResult::*;
......
......@@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*! See doc.rs for a thorough explanation of the borrow checker */
//! See doc.rs for a thorough explanation of the borrow checker
#![allow(non_camel_case_types)]
......
......@@ -8,12 +8,8 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Data structures used for tracking moves. Please see the extensive
comments in the section "Moves and initialization" in `doc.rs`.
*/
//! Data structures used for tracking moves. Please see the extensive
//! comments in the section "Moves and initialization" in `doc.rs`.
pub use self::MoveKind::*;
......@@ -297,15 +293,11 @@ fn is_var_path(&self, index: MovePathIndex) -> bool {
self.path_parent(index) == InvalidMovePathIndex
}
/// Returns the existing move path index for `lp`, if any, and otherwise adds a new index for
/// `lp` and any of its base paths that do not yet have an index.
pub fn move_path(&self,
tcx: &ty::ctxt<'tcx>,
lp: Rc<LoanPath<'tcx>>) -> MovePathIndex {
/*!
* Returns the existing move path index for `lp`, if any,
* and otherwise adds a new index for `lp` and any of its
* base paths that do not yet have an index.
*/
match self.path_map.borrow().get(&lp) {
Some(&index) => {
return index;
......@@ -370,13 +362,10 @@ fn existing_base_paths(&self, lp: &Rc<LoanPath<'tcx>>)
result
}
/// Adds any existing move path indices for `lp` and any base paths of `lp` to `result`, but
/// does not add new move paths.
fn add_existing_base_paths(&self, lp: &Rc<LoanPath<'tcx>>,
result: &mut Vec<MovePathIndex>) {
/*!
* Adds any existing move path indices for `lp` and any base
* paths of `lp` to `result`, but does not add new move paths
*/
match self.path_map.borrow().get(lp).cloned() {
Some(index) => {
self.each_base_path(index, |p| {
......@@ -397,16 +386,12 @@ fn add_existing_base_paths(&self, lp: &Rc<LoanPath<'tcx>>,
}
/// Adds a new move entry for a move of `lp` that occurs at location `id` with kind `kind`.
pub fn add_move(&self,
tcx: &ty::ctxt<'tcx>,
lp: Rc<LoanPath<'tcx>>,
id: ast::NodeId,
kind: MoveKind) {
/*!
* Adds a new move entry for a move of `lp` that occurs at
* location `id` with kind `kind`.
*/
debug!("add_move(lp={}, id={}, kind={})",
lp.repr(tcx),
id,
......@@ -428,6 +413,8 @@ pub fn add_move(&self,
});
}
/// Adds a new record for an assignment to `lp` that occurs at location `id` with the given
/// `span`.
pub fn add_assignment(&self,
tcx: &ty::ctxt<'tcx>,
lp: Rc<LoanPath<'tcx>>,
......@@ -435,11 +422,6 @@ pub fn add_assignment(&self,
span: Span,
assignee_id: ast::NodeId,
mode: euv::MutateMode) {
/*!
* Adds a new record for an assignment to `lp` that occurs at
* location `id` with the given `span`.
*/
debug!("add_assignment(lp={}, assign_id={}, assignee_id={}",
lp.repr(tcx), assign_id, assignee_id);
......@@ -473,18 +455,16 @@ pub fn add_assignment(&self,
}
}
/// Adds a new record for a match of `base_lp`, downcast to
/// variant `lp`, that occurs at location `pattern_id`. (One
/// should be able to recover the span info from the
/// `pattern_id` and the ast_map, I think.)
pub fn add_variant_match(&self,
tcx: &ty::ctxt<'tcx>,
lp: Rc<LoanPath<'tcx>>,
pattern_id: ast::NodeId,
base_lp: Rc<LoanPath<'tcx>>,
mode: euv::MatchMode) {
/*!
* Adds a new record for a match of `base_lp`, downcast to
* variant `lp`, that occurs at location `pattern_id`. (One
* should be able to recover the span info from the
* `pattern_id` and the ast_map, I think.)
*/
debug!("add_variant_match(lp={}, pattern_id={})",
lp.repr(tcx), pattern_id);
......@@ -507,18 +487,15 @@ fn fixup_fragment_sets(&self, tcx: &ty::ctxt<'tcx>) {
fragments::fixup_fragment_sets(self, tcx)
}
/// Adds the gen/kills for the various moves and
/// assignments into the provided data flow contexts.
/// Moves are generated by moves and killed by assignments and
/// scoping. Assignments are generated by assignment to variables and
/// killed by scoping. See `doc.rs` for more details.
fn add_gen_kills(&self,
tcx: &ty::ctxt<'tcx>,
dfcx_moves: &mut MoveDataFlow,
dfcx_assign: &mut AssignDataFlow) {
/*!
* Adds the gen/kills for the various moves and
* assignments into the provided data flow contexts.
* Moves are generated by moves and killed by assignments and
* scoping. Assignments are generated by assignment to variables and
* killed by scoping. See `doc.rs` for more details.
*/
for (i, the_move) in self.moves.borrow().iter().enumerate() {
dfcx_moves.add_gen(the_move.id, i);
}
......@@ -695,18 +672,14 @@ pub fn kind_of_move_of_path(&self,
ret
}
/// Iterates through each move of `loan_path` (or some base path of `loan_path`) that *may*
/// have occurred on entry to `id` without an intervening assignment. In other words, any moves
/// that would invalidate a reference to `loan_path` at location `id`.
pub fn each_move_of(&self,
id: ast::NodeId,
loan_path: &Rc<LoanPath<'tcx>>,
f: |&Move, &LoanPath<'tcx>| -> bool)
-> bool {
/*!
* Iterates through each move of `loan_path` (or some base path
* of `loan_path`) that *may* have occurred on entry to `id` without
* an intervening assignment. In other words, any moves that
* would invalidate a reference to `loan_path` at location `id`.
*/
// Bad scenarios:
//
// 1. Move of `a.b.c`, use of `a.b.c`
......@@ -755,17 +728,13 @@ pub fn each_move_of(&self,
})
}
/// Iterates through every assignment to `loan_path` that may have occurred on entry to `id`.
/// `loan_path` must be a single variable.
pub fn each_assignment_of(&self,
id: ast::NodeId,
loan_path: &Rc<LoanPath<'tcx>>,
f: |&Assignment| -> bool)
-> bool {
/*!
* Iterates through every assignment to `loan_path` that
* may have occurred on entry to `id`. `loan_path` must be
* a single variable.
*/
let loan_path_index = {
match self.move_data.existing_move_path(loan_path) {
Some(i) => i,
......
......@@ -8,12 +8,8 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
Module that constructs a control-flow graph representing an item.
Uses `Graph` as the underlying representation.
*/
//! Module that constructs a control-flow graph representing an item.
//! Uses `Graph` as the underlying representation.
use middle::graph;
use middle::ty;
......
......@@ -9,12 +9,10 @@
// except according to those terms.
/*!
* A module for propagating forward dataflow information. The analysis
* assumes that the items to be propagated can be represented as bits
* and thus uses bitvectors. Your job is simply to specify the so-called
* GEN and KILL bits for each expression.
*/
//! A module for propagating forward dataflow information. The analysis
//! assumes that the items to be propagated can be represented as bits
//! and thus uses bitvectors. Your job is simply to specify the so-called
//! GEN and KILL bits for each expression.
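The GEN/KILL scheme described above can be sketched with plain integers standing in for the bitvectors. This illustrates only the propagation rule `out = gen | (in & !kill)` over a straight-line sequence; it is not the module's actual interface:

```rust
// Propagate a set of dataflow facts forward through a sequence of
// program points, each contributing GEN and KILL bit sets.
fn propagate(entry: u64, gens: &[u64], kills: &[u64]) -> u64 {
    let mut bits = entry;
    for (&g, &k) in gens.iter().zip(kills.iter()) {
        // Facts killed here are cleared; facts generated here are set.
        bits = g | (bits & !k);
    }
    bits
}

fn main() {
    // Bit 0 is generated at step 0 and killed at step 1; bit 1 is
    // generated at step 1 and survives to the exit.
    let gens = [0b01, 0b10];
    let kills = [0b00, 0b01];
    assert_eq!(propagate(0, &gens, &kills), 0b10);
    // Entry facts that are never killed flow through unchanged.
    assert_eq!(propagate(0b100, &[0, 0], &[0, 0]), 0b100);
}
```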
pub use self::EntryOrExit::*;
......
......@@ -8,11 +8,9 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* A different sort of visitor for walking fn bodies. Unlike the
* normal visitor, which just walks the entire body in one shot, the
* `ExprUseVisitor` determines how expressions are being used.
*/
//! A different sort of visitor for walking fn bodies. Unlike the
//! normal visitor, which just walks the entire body in one shot, the
//! `ExprUseVisitor` determines how expressions are being used.
pub use self::MutateMode::*;
pub use self::LoanCause::*;
......@@ -716,12 +714,9 @@ fn walk_local(&mut self, local: &ast::Local) {
}
}
/// Indicates that the value of `blk` will be consumed, meaning either copied or moved
/// depending on its type.
fn walk_block(&mut self, blk: &ast::Block) {
/*!
* Indicates that the value of `blk` will be consumed,
* meaning either copied or moved depending on its type.
*/
debug!("walk_block(blk.id={})", blk.id);
for stmt in blk.stmts.iter() {
......@@ -821,16 +816,12 @@ fn walk_adjustment(&mut self, expr: &ast::Expr) {
}
}
/// Autoderefs for overloaded Deref calls in fact reference their receiver. That is, if we have
/// `(*x)` where `x` is of type `Rc<T>`, then this in fact is equivalent to `x.deref()`. Since
/// `deref()` is declared with `&self`, this is an autoref of `x`.
fn walk_autoderefs(&mut self,
expr: &ast::Expr,
autoderefs: uint) {
/*!
* Autoderefs for overloaded Deref calls in fact reference
* their receiver. That is, if we have `(*x)` where `x` is of
* type `Rc<T>`, then this in fact is equivalent to
* `x.deref()`. Since `deref()` is declared with `&self`, this
* is an autoref of `x`.
*/
debug!("walk_autoderefs expr={} autoderefs={}", expr.repr(self.tcx()), autoderefs);
for i in range(0, autoderefs) {
......
......@@ -33,26 +33,20 @@ pub enum SimplifiedType {
ParameterSimplifiedType,
}
/// Tries to simplify a type by dropping type parameters, deref'ing away any reference types, etc.
/// The idea is to get something simple that we can use to quickly decide if two types could unify
/// during method lookup.
///
/// If `can_simplify_params` is false, then we will fail to simplify type parameters entirely. This
/// is useful when those type parameters would be instantiated with fresh type variables, since
/// then we can't say much about whether two types would unify. Put another way,
/// `can_simplify_params` should be true if type parameters appear free in `ty` and `false` if they
/// are to be considered bound.
pub fn simplify_type(tcx: &ty::ctxt,
ty: Ty,
can_simplify_params: bool)
-> Option<SimplifiedType>
{
/*!
* Tries to simplify a type by dropping type parameters, deref'ing
* away any reference types, etc. The idea is to get something
* simple that we can use to quickly decide if two types could
* unify during method lookup.
*
* If `can_simplify_params` is false, then we will fail to
* simplify type parameters entirely. This is useful when those
* type parameters would be instantiated with fresh type
* variables, since then we can't say much about whether two types
* would unify. Put another way, `can_simplify_params` should be
* true if type parameters appear free in `ty` and `false` if they
* are to be considered bound.
*/
match ty.sty {
ty::ty_bool => Some(BoolSimplifiedType),
ty::ty_char => Some(CharSimplifiedType),
......
......@@ -8,31 +8,27 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
A graph module for use in dataflow, region resolution, and elsewhere.
# Interface details
You customize the graph by specifying a "node data" type `N` and an
"edge data" type `E`. You can then later gain access (mutable or
immutable) to these "user-data" bits. Currently, you can only add
nodes or edges to the graph. You cannot remove or modify them once
added. This could be changed if we have a need.
# Implementation details
The main tricky thing about this code is the way that edges are
stored. The edges are stored in a central array, but they are also
threaded onto two linked lists for each node, one for incoming edges
and one for outgoing edges. Note that every edge is a member of some
incoming list and some outgoing list. Basically you can load the
first index of the linked list from the node data structures (the
field `first_edge`) and then, for each edge, load the next index from
the field `next_edge`). Each of those fields is an array that should
be indexed by the direction (see the type `Direction`).
*/
//! A graph module for use in dataflow, region resolution, and elsewhere.
//!
//! # Interface details
//!
//! You customize the graph by specifying a "node data" type `N` and an
//! "edge data" type `E`. You can then later gain access (mutable or
//! immutable) to these "user-data" bits. Currently, you can only add
//! nodes or edges to the graph. You cannot remove or modify them once
//! added. This could be changed if we have a need.
//!
//! # Implementation details
//!
//! The main tricky thing about this code is the way that edges are
//! stored. The edges are stored in a central array, but they are also
//! threaded onto two linked lists for each node, one for incoming edges
//! and one for outgoing edges. Note that every edge is a member of some
//! incoming list and some outgoing list. Basically you can load the
//! first index of the linked list from the node data structures (the
//! field `first_edge`) and then, for each edge, load the next index from
//! the field `next_edge`). Each of those fields is an array that should
//! be indexed by the direction (see the type `Direction`).
#![allow(dead_code)] // still WIP
......
......@@ -8,105 +8,103 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* A classic liveness analysis based on dataflow over the AST. Computes,
* for each local variable in a function, whether that variable is live
* at a given point. Program execution points are identified by their
* id.
*
* # Basic idea
*
* The basic model is that each local variable is assigned an index. We
* represent sets of local variables using a vector indexed by this
* index. The value in the vector is either 0, indicating the variable
* is dead, or the id of an expression that uses the variable.
*
* We conceptually walk over the AST in reverse execution order. If we
* find a use of a variable, we add it to the set of live variables. If
* we find an assignment to a variable, we remove it from the set of live
* variables. When we have to merge two flows, we take the union of
* those two flows---if the variable is live on both paths, we simply
* pick one id. In the event of loops, we continue doing this until a
* fixed point is reached.
*
* ## Checking initialization
*
* At the function entry point, all variables must be dead. If this is
* not the case, we can report an error using the id found in the set of
* live variables, which identifies a use of the variable which is not
* dominated by an assignment.
*
* ## Checking moves
*
* After each explicit move, the variable must be dead.
*
* ## Computing last uses
*
* Any use of the variable where the variable is dead afterwards is a
* last use.
*
* # Implementation details
*
* The actual implementation contains two (nested) walks over the AST.
* The outer walk has the job of building up the ir_maps instance for the
* enclosing function. On the way down the tree, it identifies those AST
* nodes and variable IDs that will be needed for the liveness analysis
* and assigns them contiguous IDs. The liveness id for an AST node is
* called a `live_node` (it's a newtype'd uint) and the id for a variable
* is called a `variable` (another newtype'd uint).
*
* On the way back up the tree, as we are about to exit from a function
* declaration we allocate a `liveness` instance. Now that we know
* precisely how many nodes and variables we need, we can allocate all
* the various arrays that we will need to precisely the right size. We then
* perform the actual propagation on the `liveness` instance.
*
* This propagation is encoded in the various `propagate_through_*()`
* methods. It effectively does a reverse walk of the AST; whenever we
* reach a loop node, we iterate until a fixed point is reached.
*
* ## The `Users` struct
*
* At each live node `N`, we track three pieces of information for each
* variable `V` (these are encapsulated in the `Users` struct):
*
* - `reader`: the `LiveNode` ID of some node which will read the value
* that `V` holds on entry to `N`. Formally: a node `M` such
* that there exists a path `P` from `N` to `M` where `P` does not
* write `V`. If the `reader` is `invalid_node()`, then the current
* value will never be read (the variable is dead, essentially).
*
* - `writer`: the `LiveNode` ID of some node which will write the
* variable `V` and which is reachable from `N`. Formally: a node `M`
* such that there exists a path `P` from `N` to `M` and `M` writes
* `V`. If the `writer` is `invalid_node()`, then there is no writer
* of `V` that follows `N`.
*
* - `used`: a boolean value indicating whether `V` is *used*. We
* distinguish a *read* from a *use* in that a *use* is some read that
* is not just used to generate a new value. For example, `x += 1` is
* a read but not a use. This is used to generate better warnings.
*
* ## Special Variables
*
* We generate various special variables for various, well, special purposes.
* These are described in the `specials` struct:
*
* - `exit_ln`: a live node that is generated to represent every 'exit' from
* the function, whether it be by explicit return, panic, or other means.
*
* - `fallthrough_ln`: a live node that represents a fallthrough
*
* - `no_ret_var`: a synthetic variable that is only 'read' from, the
* fallthrough node. This allows us to detect functions where we fail
* to return explicitly.
* - `clean_exit_var`: a synthetic variable that is only 'read' from the
* fallthrough node. It is only live if the function could converge
* via means other than an explicit `return` expression. That is, it is
* only dead if the end of the function's block can never be reached.
* It is the responsibility of typeck to ensure that there are no
* `return` expressions in a function declared as diverging.
*/
//! A classic liveness analysis based on dataflow over the AST. Computes,
//! for each local variable in a function, whether that variable is live
//! at a given point. Program execution points are identified by their
//! id.
//!
//! # Basic idea
//!
//! The basic model is that each local variable is assigned an index. We
//! represent sets of local variables using a vector indexed by this
//! index. The value in the vector is either 0, indicating the variable
//! is dead, or the id of an expression that uses the variable.
//!
//! We conceptually walk over the AST in reverse execution order. If we
//! find a use of a variable, we add it to the set of live variables. If
//! we find an assignment to a variable, we remove it from the set of live
//! variables. When we have to merge two flows, we take the union of
//! those two flows---if the variable is live on both paths, we simply
//! pick one id. In the event of loops, we continue doing this until a
//! fixed point is reached.
//!
//! ## Checking initialization
//!
//! At the function entry point, all variables must be dead. If this is
//! not the case, we can report an error using the id found in the set of
//! live variables, which identifies a use of the variable which is not
//! dominated by an assignment.
//!
//! ## Checking moves
//!
//! After each explicit move, the variable must be dead.
//!
//! ## Computing last uses
//!
//! Any use of the variable where the variable is dead afterwards is a
//! last use.
//!
//! # Implementation details
//!
//! The actual implementation contains two (nested) walks over the AST.
//! The outer walk has the job of building up the ir_maps instance for the
//! enclosing function. On the way down the tree, it identifies those AST
//! nodes and variable IDs that will be needed for the liveness analysis
//! and assigns them contiguous IDs. The liveness id for an AST node is
//! called a `live_node` (it's a newtype'd uint) and the id for a variable
//! is called a `variable` (another newtype'd uint).
//!
//! On the way back up the tree, as we are about to exit from a function
//! declaration we allocate a `liveness` instance. Now that we know
//! precisely how many nodes and variables we need, we can allocate all
//! the various arrays that we will need to precisely the right size. We then
//! perform the actual propagation on the `liveness` instance.
//!
//! This propagation is encoded in the various `propagate_through_*()`
//! methods. It effectively does a reverse walk of the AST; whenever we
//! reach a loop node, we iterate until a fixed point is reached.
//!
//! ## The `Users` struct
//!
//! At each live node `N`, we track three pieces of information for each
//! variable `V` (these are encapsulated in the `Users` struct):
//!
//! - `reader`: the `LiveNode` ID of some node which will read the value
//! that `V` holds on entry to `N`. Formally: a node `M` such
//! that there exists a path `P` from `N` to `M` where `P` does not
//! write `V`. If the `reader` is `invalid_node()`, then the current
//! value will never be read (the variable is dead, essentially).
//!
//! - `writer`: the `LiveNode` ID of some node which will write the
//! variable `V` and which is reachable from `N`. Formally: a node `M`
//! such that there exists a path `P` from `N` to `M` and `M` writes
//! `V`. If the `writer` is `invalid_node()`, then there is no writer
//! of `V` that follows `N`.
//!
//! - `used`: a boolean value indicating whether `V` is *used*. We
//! distinguish a *read* from a *use* in that a *use* is some read that
//! is not just used to generate a new value. For example, `x += 1` is
//! a read but not a use. This is used to generate better warnings.
//!
//! ## Special Variables
//!
//! We generate various special variables for various, well, special purposes.
//! These are described in the `specials` struct:
//!
//! - `exit_ln`: a live node that is generated to represent every 'exit' from
//! the function, whether it be by explicit return, panic, or other means.
//!
//! - `fallthrough_ln`: a live node that represents a fallthrough
//!
//! - `no_ret_var`: a synthetic variable that is only 'read' from the
//! fallthrough node. This allows us to detect functions where we fail
//! to return explicitly.
//!
//! - `clean_exit_var`: a synthetic variable that is only 'read' from the
//! fallthrough node. It is only live if the function could converge
//! via means other than an explicit `return` expression. That is, it is
//! only dead if the end of the function's block can never be reached.
//! It is the responsibility of typeck to ensure that there are no
//! `return` expressions in a function declared as diverging.
use self::LoopKind::*;
use self::LiveNodeKind::*;
use self::VarKind::*;
......
......@@ -8,57 +8,55 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* # Categorization
*
* The job of the categorization module is to analyze an expression to
* determine what kind of memory is used in evaluating it (for example,
* where dereferences occur and what kind of pointer is dereferenced;
* whether the memory is mutable; etc)
*
* Categorization effectively transforms all of our expressions into
* expressions of the following forms (the actual enum has many more
* possibilities, naturally, but they are all variants of these base
* forms):
*
* E = rvalue // some computed rvalue
* | x // address of a local variable or argument
* | *E // deref of a ptr
* | E.comp // access to an interior component
*
* Imagine a routine ToAddr(Expr) that evaluates an expression and returns an
* address where the result is to be found. If Expr is an lvalue, then this
* is the address of the lvalue. If Expr is an rvalue, this is the address of
* some temporary spot in memory where the result is stored.
*
* Now, cat_expr() classifies the expression Expr and the address A=ToAddr(Expr)
* as follows:
*
* - cat: what kind of expression was this? This is a subset of the
* full expression forms which only includes those that we care about
* for the purpose of the analysis.
* - mutbl: mutability of the address A
* - ty: the type of data found at the address A
*
* The resulting categorization tree differs somewhat from the expressions
* themselves. For example, auto-derefs are explicit. Also, an index a[b] is
* decomposed into two operations: a dereference to reach the array data and
* then an index to jump forward to the relevant item.
*
* ## By-reference upvars
*
* One part of the translation which may be non-obvious is that we translate
* closure upvars into the dereference of a borrowed pointer; this more closely
* resembles the runtime translation. So, for example, if we had:
*
* let mut x = 3;
* let y = 5;
* let inc = || x += y;
*
* Then when we categorize `x` (*within* the closure) we would yield a
* result of `*x'`, effectively, where `x'` is a `cat_upvar` reference
* tied to `x`. The type of `x'` will be a borrowed pointer.
*/
//! # Categorization
//!
//! The job of the categorization module is to analyze an expression to
//! determine what kind of memory is used in evaluating it (for example,
//! where dereferences occur and what kind of pointer is dereferenced;
//! whether the memory is mutable; etc.)
//!
//! Categorization effectively transforms all of our expressions into
//! expressions of the following forms (the actual enum has many more
//! possibilities, naturally, but they are all variants of these base
//! forms):
//!
//! E = rvalue // some computed rvalue
//! | x // address of a local variable or argument
//! | *E // deref of a ptr
//! | E.comp // access to an interior component
//!
//! Imagine a routine ToAddr(Expr) that evaluates an expression and returns an
//! address where the result is to be found. If Expr is an lvalue, then this
//! is the address of the lvalue. If Expr is an rvalue, this is the address of
//! some temporary spot in memory where the result is stored.
//!
//! Now, cat_expr() classifies the expression Expr and the address A=ToAddr(Expr)
//! as follows:
//!
//! - cat: what kind of expression was this? This is a subset of the
//! full expression forms which only includes those that we care about
//! for the purpose of the analysis.
//! - mutbl: mutability of the address A
//! - ty: the type of data found at the address A
//!
//! The resulting categorization tree differs somewhat from the expressions
//! themselves. For example, auto-derefs are explicit. Also, an index a[b] is
//! decomposed into two operations: a dereference to reach the array data and
//! then an index to jump forward to the relevant item.
//!
//! ## By-reference upvars
//!
//! One part of the translation which may be non-obvious is that we translate
//! closure upvars into the dereference of a borrowed pointer; this more closely
//! resembles the runtime translation. So, for example, if we had:
//!
//! let mut x = 3;
//! let y = 5;
//! let inc = || x += y;
//!
//! Then when we categorize `x` (*within* the closure) we would yield a
//! result of `*x'`, effectively, where `x'` is a `cat_upvar` reference
//! tied to `x`. The type of `x'` will be a borrowed pointer.
#![allow(non_camel_case_types)]
......@@ -1058,20 +1056,17 @@ fn deref_vec<N:ast_node>(&self,
}
}
/// Given a pattern P like: `[_, ..Q, _]`, where `vec_cmt` is the cmt for `P`, `slice_pat` is
/// the pattern `Q`, returns:
///
/// * a cmt for `Q`
/// * the mutability and region of the slice `Q`
///
/// These last two bits of info happen to be things that borrowck needs.
pub fn cat_slice_pattern(&self,
vec_cmt: cmt<'tcx>,
slice_pat: &ast::Pat)
-> McResult<(cmt<'tcx>, ast::Mutability, ty::Region)> {
/*!
* Given a pattern P like: `[_, ..Q, _]`, where `vec_cmt` is
* the cmt for `P`, `slice_pat` is the pattern `Q`, returns:
* - a cmt for `Q`
* - the mutability and region of the slice `Q`
*
* These last two bits of info happen to be things that
* borrowck needs.
*/
let slice_ty = if_ok!(self.node_ty(slice_pat.id));
let (slice_mutbl, slice_r) = vec_slice_info(self.tcx(),
slice_pat,
......@@ -1079,17 +1074,13 @@ pub fn cat_slice_pattern(&self,
let cmt_slice = self.cat_index(slice_pat, self.deref_vec(slice_pat, vec_cmt));
return Ok((cmt_slice, slice_mutbl, slice_r));
/// In a pattern like [a, b, ..c], normally `c` has slice type, but if you have [a, b,
/// ..ref c], then the type of `ref c` will be `&&[]`, so to extract the slice details we
/// have to recurse through rptrs.
fn vec_slice_info(tcx: &ty::ctxt,
pat: &ast::Pat,
slice_ty: Ty)
-> (ast::Mutability, ty::Region) {
/*!
* In a pattern like [a, b, ..c], normally `c` has slice type,
* but if you have [a, b, ..ref c], then the type of `ref c`
* will be `&&[]`, so to extract the slice details we have
* to recurse through rptrs.
*/
match slice_ty.sty {
ty::ty_rptr(r, ref mt) => match mt.ty.sty {
ty::ty_vec(_, None) => (mt.mutbl, r),
......@@ -1428,13 +1419,9 @@ pub fn guarantor(&self) -> cmt<'tcx> {
}
}
/// Returns `Some(_)` if this lvalue represents a freely aliasable pointer type.
pub fn freely_aliasable(&self, ctxt: &ty::ctxt<'tcx>)
-> Option<AliasableReason> {
/*!
* Returns `Some(_)` if this lvalue represents a freely aliasable
* pointer type.
*/
// Maybe non-obvious: copied upvars can only be considered
// non-aliasable in once closures, since any other kind can be
// aliased and eventually reused.
......
......@@ -8,18 +8,13 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
This file actually contains two passes related to regions. The first
pass builds up the `scope_map`, which describes the parent links in
the region hierarchy. The second pass infers which types must be
region parameterized.
Most of the documentation on regions can be found in
`middle/typeck/infer/region_inference.rs`
*/
//! This file actually contains two passes related to regions. The first
//! pass builds up the `scope_map`, which describes the parent links in
//! the region hierarchy. The second pass infers which types must be
//! region parameterized.
//!
//! Most of the documentation on regions can be found in
//! `middle/typeck/infer/region_inference.rs`
use session::Session;
use middle::ty::{mod, Ty, FreeRegion};
......@@ -171,14 +166,10 @@ pub fn record_rvalue_scope(&self, var: ast::NodeId, lifetime: CodeExtent) {
self.rvalue_scopes.borrow_mut().insert(var, lifetime);
}
/// Records that a scope is a TERMINATING SCOPE. Whenever we create automatic temporaries --
/// e.g. by an expression like `a().f` -- they will be freed within the innermost terminating
/// scope.
pub fn mark_as_terminating_scope(&self, scope_id: CodeExtent) {
/*!
* Records that a scope is a TERMINATING SCOPE. Whenever we
* create automatic temporaries -- e.g. by an
* expression like `a().f` -- they will be freed within
* the innermost terminating scope.
*/
debug!("record_terminating_scope(scope_id={})", scope_id);
self.terminating_scopes.borrow_mut().insert(scope_id);
}
......@@ -197,10 +188,8 @@ pub fn encl_scope(&self, id: CodeExtent) -> CodeExtent {
}
}
/// Returns the lifetime of the local variable `var_id`
pub fn var_scope(&self, var_id: ast::NodeId) -> CodeExtent {
/*!
* Returns the lifetime of the local variable `var_id`
*/
match self.var_map.borrow().get(&var_id) {
Some(&r) => r,
None => { panic!("no enclosing scope for id {}", var_id); }
......@@ -257,15 +246,12 @@ pub fn scopes_intersect(&self, scope1: CodeExtent, scope2: CodeExtent)
self.is_subscope_of(scope2, scope1)
}
/// Returns true if `subscope` is equal to or is lexically nested inside `superscope` and false
/// otherwise.
pub fn is_subscope_of(&self,
subscope: CodeExtent,
superscope: CodeExtent)
-> bool {
/*!
* Returns true if `subscope` is equal to or is lexically
* nested inside `superscope` and false otherwise.
*/
let mut s = subscope;
while superscope != s {
match self.scope_map.borrow().get(&s) {
......@@ -285,27 +271,20 @@ pub fn is_subscope_of(&self,
return true;
}
/// Determines whether two free regions have a subregion relationship
/// by walking the graph encoded in `free_region_map`. Note that
/// it is possible that `sub != sup` and `sub <= sup` and `sup <= sub`
/// (that is, the user can give two different names to the same lifetime).
pub fn sub_free_region(&self, sub: FreeRegion, sup: FreeRegion) -> bool {
/*!
* Determines whether two free regions have a subregion relationship
* by walking the graph encoded in `free_region_map`. Note that
* it is possible that `sub != sup` and `sub <= sup` and `sup <= sub`
* (that is, the user can give two different names to the same lifetime).
*/
can_reach(&*self.free_region_map.borrow(), sub, sup)
}
/// Determines whether one region is a subregion of another. This is intended to run *after
/// inference* and sadly the logic is somewhat duplicated with the code in infer.rs.
pub fn is_subregion_of(&self,
sub_region: ty::Region,
super_region: ty::Region)
-> bool {
/*!
* Determines whether one region is a subregion of another. This is
* intended to run *after inference* and sadly the logic is somewhat
* duplicated with the code in infer.rs.
*/
debug!("is_subregion_of(sub_region={}, super_region={})",
sub_region, super_region);
......@@ -345,16 +324,12 @@ pub fn is_subregion_of(&self,
}
}
/// Finds the nearest common ancestor (if any) of two scopes. That is, finds the smallest
/// scope which is greater than or equal to both `scope_a` and `scope_b`.
pub fn nearest_common_ancestor(&self,
scope_a: CodeExtent,
scope_b: CodeExtent)
-> Option<CodeExtent> {
/*!
* Finds the nearest common ancestor (if any) of two scopes. That
* is, finds the smallest scope which is greater than or equal to
* both `scope_a` and `scope_b`.
*/
if scope_a == scope_b { return Some(scope_a); }
let a_ancestors = ancestors_of(self, scope_a);
......@@ -681,18 +656,15 @@ fn resolve_local(visitor: &mut RegionResolutionVisitor, local: &ast::Local) {
visit::walk_local(visitor, local);
/// True if `pat` matches the `P&` nonterminal:
///
/// P& = ref X
/// | StructName { ..., P&, ... }
/// | VariantName(..., P&, ...)
/// | [ ..., P&, ... ]
/// | ( ..., P&, ... )
/// | box P&
fn is_binding_pat(pat: &ast::Pat) -> bool {
/*!
* True if `pat` match the `P&` nonterminal:
*
* P& = ref X
* | StructName { ..., P&, ... }
* | VariantName(..., P&, ...)
* | [ ..., P&, ... ]
* | ( ..., P&, ... )
* | box P&
*/
match pat.node {
ast::PatIdent(ast::BindByRef(_), _, _) => true,
......@@ -719,35 +691,27 @@ fn is_binding_pat(pat: &ast::Pat) -> bool {
}
}
/// True if `ty` is a borrowed pointer type like `&int` or `&[...]`.
fn is_borrowed_ty(ty: &ast::Ty) -> bool {
/*!
* True if `ty` is a borrowed pointer type
* like `&int` or `&[...]`.
*/
match ty.node {
ast::TyRptr(..) => true,
_ => false
}
}
/// If `expr` matches the `E&` grammar, then records an extended rvalue scope as appropriate:
///
/// E& = & ET
/// | StructName { ..., f: E&, ... }
/// | [ ..., E&, ... ]
/// | ( ..., E&, ... )
/// | {...; E&}
/// | box E&
/// | E& as ...
/// | ( E& )
fn record_rvalue_scope_if_borrow_expr(visitor: &mut RegionResolutionVisitor,
expr: &ast::Expr,
blk_id: CodeExtent) {
/*!
* If `expr` matches the `E&` grammar, then records an extended
* rvalue scope as appropriate:
*
* E& = & ET
* | StructName { ..., f: E&, ... }
* | [ ..., E&, ... ]
* | ( ..., E&, ... )
* | {...; E&}
* | box E&
* | E& as ...
* | ( E& )
*/
match expr.node {
ast::ExprAddrOf(_, ref subexpr) => {
record_rvalue_scope_if_borrow_expr(visitor, &**subexpr, blk_id);
......@@ -787,29 +751,24 @@ fn record_rvalue_scope_if_borrow_expr(visitor: &mut RegionResolutionVisitor,
}
}
/// Applied to an expression `expr` if `expr` -- or something owned or partially owned by
/// `expr` -- is going to be indirectly referenced by a variable in a let statement. In that
/// case, the "temporary lifetime" or `expr` is extended to be the block enclosing the `let`
/// statement.
///
/// More formally, if `expr` matches the grammar `ET`, record the rvalue scope of the matching
/// `<rvalue>` as `blk_id`:
///
/// ET = *ET
/// | ET[...]
/// | ET.f
/// | (ET)
/// | <rvalue>
///
/// Note: ET is intended to match "rvalues or lvalues based on rvalues".
fn record_rvalue_scope<'a>(visitor: &mut RegionResolutionVisitor,
expr: &'a ast::Expr,
blk_scope: CodeExtent) {
/*!
* Applied to an expression `expr` if `expr` -- or something
* owned or partially owned by `expr` -- is going to be
* indirectly referenced by a variable in a let statement. In
* that case, the "temporary lifetime" or `expr` is extended
* to be the block enclosing the `let` statement.
*
* More formally, if `expr` matches the grammar `ET`, record
* the rvalue scope of the matching `<rvalue>` as `blk_id`:
*
* ET = *ET
* | ET[...]
* | ET.f
* | (ET)
* | <rvalue>
*
* Note: ET is intended to match "rvalues or
* lvalues based on rvalues".
*/
let mut expr = expr;
loop {
// Note: give all the expressions matching `ET` with the
......
......@@ -8,14 +8,12 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* Name resolution for lifetimes.
*
* Name resolution for lifetimes follows MUCH simpler rules than the
* full resolve. For example, lifetime names are never exported or
* used between functions, and they operate in a purely top-down
* way. Therefore we break lifetime name resolution into a separate pass.
*/
//! Name resolution for lifetimes.
//!
//! Name resolution for lifetimes follows MUCH simpler rules than the
//! full resolve. For example, lifetime names are never exported or
//! used between functions, and they operate in a purely top-down
//! way. Therefore we break lifetime name resolution into a separate pass.
pub use self::DefRegion::*;
use self::ScopeChain::*;
......@@ -254,34 +252,27 @@ fn with(&mut self, wrap_scope: ScopeChain, f: |&mut LifetimeContext|) {
}
/// Visits self by adding a scope and handling recursive walk over the contents with `walk`.
///
/// Handles visiting fns and methods. These are a bit complicated because we must distinguish
/// early- vs late-bound lifetime parameters. We do this by checking which lifetimes appear
/// within type bounds; those are early bound lifetimes, and the rest are late bound.
///
/// For example:
///
/// fn foo<'a,'b,'c,T:Trait<'b>>(...)
///
/// Here `'a` and `'c` are late bound but `'b` is early bound. Note that early- and late-bound
/// lifetimes may be interspersed together.
///
/// If early bound lifetimes are present, we separate them into their own list (and likewise
/// for late bound). They will be numbered sequentially, starting from the lowest index that is
/// already in scope (for a fn item, that will be 0, but for a method it might not be). Late
/// bound lifetimes are resolved by name and associated with a binder id (`binder_id`), so the
/// ordering is not important there.
fn visit_early_late(&mut self,
early_space: subst::ParamSpace,
generics: &ast::Generics,
walk: |&mut LifetimeContext|) {
/*!
* Handles visiting fns and methods. These are a bit
* complicated because we must distinguish early- vs late-bound
* lifetime parameters. We do this by checking which lifetimes
* appear within type bounds; those are early bound lifetimes,
* and the rest are late bound.
*
* For example:
*
* fn foo<'a,'b,'c,T:Trait<'b>>(...)
*
* Here `'a` and `'c` are late bound but `'b` is early
* bound. Note that early- and late-bound lifetimes may be
* interspersed together.
*
* If early bound lifetimes are present, we separate them into
* their own list (and likewise for late bound). They will be
* numbered sequentially, starting from the lowest index that
* is already in scope (for a fn item, that will be 0, but for
* a method it might not be). Late bound lifetimes are
* resolved by name and associated with a binder id (`binder_id`), so
* the ordering is not important there.
*/
let referenced_idents = early_bound_lifetime_names(generics);
debug!("visit_early_late: referenced_idents={}",
......@@ -479,13 +470,9 @@ pub fn early_bound_lifetimes<'a>(generics: &'a ast::Generics) -> Vec<ast::Lifeti
.collect()
}
/// Given a set of generic declarations, returns a list of names containing all early bound
/// lifetime names for those generics. (In fact, this list may also contain other names.)
fn early_bound_lifetime_names(generics: &ast::Generics) -> Vec<ast::Name> {
/*!
* Given a set of generic declarations, returns a list of names
* containing all early bound lifetime names for those
* generics. (In fact, this list may also contain other names.)
*/
// Create two lists, dividing the lifetimes into early/late bound.
// Initially, all of them are considered late, but we will move
// things from late into early as we go if we find references to
......
This diff is collapsed.
......@@ -8,7 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*! See `doc.rs` for high-level documentation */
//! See `doc.rs` for high-level documentation
use super::SelectionContext;
use super::Obligation;
......
......@@ -8,9 +8,7 @@
// option. This file may not be copied, modified, or distributed
// except according to those terms.
/*!
* Code for type-checking closure expressions.
*/
//! Code for type-checking closure expressions.
use super::check_fn;
use super::{Expectation, ExpectCastableToType, ExpectHasType, NoExpectation};
......