Commit 680b5a90 authored by Hubert Czerpak, committed by Fabian Hueske

[FLINK-2559] Clean up JavaDocs

- Remove broken HTML tags like <br/>, <p/>, ...
- Close unclosed HTML tags
- Replace special characters with HTML escapes, e.g., '<' with &lt;
- Wrap code examples in {@code}
- Fix incorrect @see and @link references
- Fix incorrect @throws tags
- Fix typos

This closes #1298
Parent ec7bf50d
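
To make the cleanup rules listed above concrete, here is a small, hypothetical Javadoc sketch; the class and method names are illustrative and do not appear anywhere in this commit. It follows the conventions the commit moves toward: plain <p> and <br> instead of the self-closing variants, {@code ...} around generic types such as Tuple2<String, Integer>, HTML entities for literal angle brackets and comparison signs, and one exception per @throws tag.

import java.io.FileNotFoundException;

/**
 * Hypothetical example class showing the cleaned-up Javadoc style.
 * <p>
 * The input is a plain text file with lines separated by newline characters.
 * <p>
 * Usage: <code>WordCountExample &lt;text path&gt; &lt;result path&gt;</code><br>
 * The output contains pairs of the form "(word, 1)" ({@code Tuple2<String, Integer>}),
 * keeping only words whose count is &gt; 1.
 */
public class WordCountExample {

    /**
     * Opens the input file at the given path.
     *
     * @param textPath the path of the input text file
     * @throws FileNotFoundException if no file exists at {@code textPath}
     */
    public void open(String textPath) throws FileNotFoundException {
        // Hypothetical body; a real implementation would open and read the file here.
    }
}

Outside {@code ...}, a literal '<' is parsed by the Javadoc tool as the start of an HTML tag, which is why the usage line and the count comparison above use the &lt; and &gt; entities instead.
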
......@@ -696,7 +696,8 @@ public class CliFrontend {
* Creates a Packaged program from the given command line options.
*
* @return A PackagedProgram (upon success)
* @throws java.io.FileNotFoundException, org.apache.flink.client.program.ProgramInvocationException, java.lang.Throwable
* @throws java.io.FileNotFoundException
* @throws org.apache.flink.client.program.ProgramInvocationException
*/
protected PackagedProgram buildProgram(ProgramOptions options)
throws FileNotFoundException, ProgramInvocationException
......
......@@ -28,19 +28,16 @@ import org.apache.flink.storm.excamation.operators.ExclamationBolt;
* Implements the "Exclamation" program that attaches five exclamation mark to every line of a text files in a streaming
* fashion. The program is constructed as a regular {@link backtype.storm.generated.StormTopology} and submitted to
* Flink for execution in the same way as to a Storm {@link backtype.storm.LocalCluster}.
* <p/>
* <p>
* This example shows how to run program directly within Java, thus it cannot be used to submit a
* {@link backtype.storm.generated.StormTopology} via Flink command line clients (ie, bin/flink).
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* Usage: <code>ExclamationLocal &lt;text path&gt; &lt;result path&gt;</code><br/>
* <p>
* Usage: <code>ExclamationLocal &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from
* {@link org.apache.flink.examples.java.wordcount.util.WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>run a regular Storm program locally on Flink</li>
......
......@@ -29,17 +29,14 @@ import org.apache.flink.storm.util.BoltPrintSink;
/**
* Implements the "Exclamation" program that attaches two exclamation marks to every line of a text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology}.
* <p/>
* <p/>
* fashion. The program is constructed as a regular {@link backtype.storm.generated.StormTopology}.
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>Exclamation[Local|RemoteByClient|RemoteBySubmitter] &lt;text path&gt;
* &lt;result path&gt;</code><br/>
* &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>construct a regular Storm topology as Flink program</li>
......
......@@ -31,17 +31,14 @@ import backtype.storm.utils.Utils;
/**
* Implements the "Exclamation" program that attaches 3+x exclamation marks to every line of a text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology}.
* <p/>
* <p/>
* fashion. The program is constructed as a regular {@link backtype.storm.generated.StormTopology}.
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage:
* <code>ExclamationWithmBolt &lt;text path&gt; &lt;result path&gt; &lt;number of exclamation marks&gt;</code><br/>
* <code>ExclamationWithmBolt &lt;text path&gt; &lt;result path&gt; &lt;number of exclamation marks&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData} with x=2.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>use a Bolt within a Flink Streaming program</li>
......
......@@ -32,16 +32,13 @@ import backtype.storm.utils.Utils;
/**
* Implements the "Exclamation" program that attaches six exclamation marks to every line of a text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology}.
* <p/>
* <p/>
* fashion. The program is constructed as a regular {@link backtype.storm.generated.StormTopology}.
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* Usage: <code>ExclamationWithSpout &lt;text path&gt; &lt;result path&gt;</code><br/>
* <p>
* Usage: <code>ExclamationWithSpout &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>use a Storm spout within a Flink Streaming program</li>
......
......@@ -33,14 +33,14 @@ import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
/**
* Implements a simple example with two declared output streams for the embedded spout.
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>handle multiple output stream of a spout</li>
* <li>accessing each stream by .split(...) and .select(...)</li>
* <li>strip wrapper data type SplitStreamType for further processing in Flink</li>
* </ul>
* <p/>
* <p>
* This example would work the same way for multiple bolt output streams.
*/
public class SpoutSplitExample {
......
......@@ -30,15 +30,12 @@ import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
/**
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The tokenizer step is performed by a {@link IRichBolt Bolt}.
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCount &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>use a Bolt within a Flink Streaming program.</li>
......
......@@ -36,15 +36,12 @@ import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The tokenizer step is performed by a {@link IRichBolt Bolt}. In contrast to {@link BoltTokenizerWordCount}
* the tokenizer's input is a POJO type and the single field is accessed by name.
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCount &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>how to access attributes by name within a Bolt for POJO type input streams
......
......@@ -38,15 +38,12 @@ import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The tokenizer step is performed by a {@link IRichBolt Bolt}. In contrast to {@link BoltTokenizerWordCount}
* the tokenizer's input is a {@link Tuple} type and the single field is accessed by name.
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCount &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>how to access attributes by name within a Bolt for {@link Tuple} type input streams
......
......@@ -34,15 +34,12 @@ import org.apache.flink.util.Collector;
/**
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The used data source is a {@link IRichSpout Spout}.
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCount &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>use a Spout within a Flink Streaming program.</li>
......@@ -89,7 +86,7 @@ public class SpoutSourceWordCount {
/**
* Implements the string tokenizer that splits sentences into words as a user-defined FlatMapFunction. The function
* takes a line (String) and splits it into multiple pairs in the form of "(word,1)" (Tuple2<String, Integer>).
* takes a line (String) and splits it into multiple pairs in the form of "(word,1)" ({@code Tuple2<String, Integer>}).
*/
public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
private static final long serialVersionUID = 1L;
......
......@@ -29,18 +29,15 @@ import org.apache.flink.storm.api.FlinkTopologyBuilder;
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology} and submitted to Flink for execution in the
* same way as to a Storm {@link LocalCluster}.
* <p/>
* <p>
* This example shows how to run program directly within Java, thus it cannot be used to submit a {@link StormTopology}
* via Flink command line clients (ie, bin/flink).
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCountLocal &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>run a regular Storm program locally on Flink</li>
......
......@@ -30,18 +30,15 @@ import org.apache.flink.storm.api.FlinkTopologyBuilder;
* fashion. The program is constructed as a regular {@link StormTopology} and submitted to Flink for execution in the
* same way as to a Storm {@link LocalCluster}. In contrast to {@link WordCountLocal} all bolts access the field of
* input tuples by name instead of index.
* <p/>
* <p>
* This example shows how to run program directly within Java, thus it cannot be used to submit a {@link StormTopology}
* via Flink command line clients (ie, bin/flink).
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCountLocalByName &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>run a regular Storm program locally on Flink
......
......@@ -33,18 +33,15 @@ import org.apache.flink.storm.api.FlinkTopologyBuilder;
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology} and submitted to Flink for execution in the
* same way as to a Storm cluster similar to {@link NimbusClient}. The Flink cluster can be local or remote.
* <p/>
* <p>
* This example shows how to submit the program via Java, thus it cannot be used to submit a {@link StormTopology} via
* Flink command line clients (ie, bin/flink).
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCountRemoteByClient &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>submit a regular Storm program to a local or remote Flink cluster.</li>
......
......@@ -30,17 +30,14 @@ import org.apache.flink.storm.api.FlinkTopologyBuilder;
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology} and submitted to Flink for execution in the
* same way as to a Storm cluster similar to {@link StormSubmitter}. The Flink cluster can be local or remote.
* <p/>
* <p>
* This example shows how to submit the program via Java as well as Flink's command line client (ie, bin/flink).
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage: <code>WordCountRemoteBySubmitter &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>submit a regular Storm program to a local or remote Flink cluster.</li>
......
......@@ -36,16 +36,13 @@ import org.apache.flink.storm.wordcount.operators.WordCountInMemorySpout;
/**
* Implements the "WordCount" program that computes a simple word occurrence histogram over text files in a streaming
* fashion. The program is constructed as a regular {@link StormTopology}.
* <p/>
* <p/>
* <p>
* The input is a plain text file with lines separated by newline characters.
* <p/>
* <p/>
* <p>
* Usage:
* <code>WordCount[Local|LocalByName|RemoteByClient|RemoteBySubmitter] &lt;text path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from {@link WordCountData}.
* <p/>
* <p/>
* <p>
* This example shows how to:
* <ul>
* <li>how to construct a regular Storm topology as Flink program</li>
......
......@@ -138,7 +138,7 @@ public class FlinkClient {
/**
* Return a reference to itself.
* <p/>
* <p>
* {@link FlinkClient} mimics both, {@link NimbusClient} and {@link Nimbus}{@code .Client}, at once.
*
* @return A reference to itself.
......
......@@ -55,8 +55,8 @@ import java.util.Set;
/**
* {@link FlinkTopologyBuilder} mimics a {@link TopologyBuilder}, but builds a Flink program instead of a Storm
* topology. Most methods (except {@link #createTopology()} are copied from the original {@link TopologyBuilder}
* implementation to ensure equal behavior.<br />
* <br />
* implementation to ensure equal behavior.<br>
* <br>
* <strong>CAUTION: {@link IRichStateSpout StateSpout}s are currently not supported.</strong>
*/
public class FlinkTopologyBuilder {
......
......@@ -20,10 +20,11 @@ package org.apache.flink.storm.util;
import org.apache.flink.streaming.api.datastream.DataStream;
/**
* Used by {@link org.apache.flink.storm.wrappers.AbstractStormCollector AbstractStormCollector} to wrap
* Used by org.apache.flink.storm.wrappers.AbstractStormCollector to wrap
* output tuples if multiple output streams are declared. For this case, the Flink output data stream must be split via
* {@link DataStream#split(org.apache.flink.streaming.api.collector.selector.OutputSelector) .split(...)} using
* {@link StormStreamSelector}.
*
*/
public class SplitStreamType<T> {
......
......@@ -42,8 +42,8 @@ import com.google.common.collect.Sets;
* A {@link BoltWrapper} wraps an {@link IRichBolt} in order to execute the Storm bolt within a Flink Streaming
* program. It takes the Flink input tuples of type {@code IN} and transforms them into {@link StormTuple}s that the
* bolt can process. Furthermore, it takes the bolt's output tuples and transforms them into Flink tuples of type
* {@code OUT} (see {@link AbstractStormCollector} for supported types).<br />
* <br />
* {@code OUT} (see {@link AbstractStormCollector} for supported types).<br>
* <br>
* <strong>CAUTION: currently, only simple bolts are supported! (ie, bolts that do not use the Storm configuration
* <code>Map</code> or <code>TopologyContext</code> that is provided by the bolt's <code>open(..)</code> method.
* Furthermore, acking and failing of tuples as well as accessing tuple attributes by field names is not supported so
......
......@@ -38,15 +38,15 @@ import com.google.common.collect.Sets;
/**
* A {@link SpoutWrapper} wraps an {@link IRichSpout} in order to execute it within a Flink Streaming program. It
* takes the spout's output tuples and transforms them into Flink tuples of type {@code OUT} (see
* {@link SpoutCollector} for supported types).<br />
* <br />
* {@link SpoutCollector} for supported types).<br>
* <br>
* Per default, {@link SpoutWrapper} calls the wrapped spout's {@link IRichSpout#nextTuple() nextTuple()} method in
* an infinite loop.<br />
* an infinite loop.<br>
* Alternatively, {@link SpoutWrapper} can call {@link IRichSpout#nextTuple() nextTuple()} for a finite number of
* times and terminate automatically afterwards (for finite input streams). The number of {@code nextTuple()} calls can
* be specified as a certain number of invocations or can be undefined. In the undefined case, {@link SpoutWrapper}
* terminates if no record was emitted to the output collector for the first time during a call to
* {@link IRichSpout#nextTuple() nextTuple()}.<br />
* {@link IRichSpout#nextTuple() nextTuple()}.<br>
* If the given spout implements {@link FiniteSpout} interface and {@link #numberOfInvocations} is not provided or
* is {@code null}, {@link SpoutWrapper} calls {@link IRichSpout#nextTuple() nextTuple()} method until
* {@link FiniteSpout#reachedEnd()} returns true.
......@@ -258,7 +258,7 @@ public final class SpoutWrapper<OUT> extends RichParallelSourceFunction<OUT> {
/**
* {@inheritDoc}
* <p/>
* <p>
* Sets the {@link #isRunning} flag to {@code false}.
*/
@Override
......
......@@ -22,7 +22,7 @@ import java.util.List;
/**
* Entities which have been parsed out of the text of the
* {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
* {@link org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
*/
public class Entities {
......
......@@ -19,7 +19,7 @@ package org.apache.flink.contrib.tweetinputformat.model.tweet.entities;
/**
* Represents hashtags which have been parsed out of the
* {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet} text.
* {@link org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet} text.
*/
public class HashTags {
......
......@@ -21,7 +21,7 @@ import java.util.HashMap;
import java.util.Map;
/**
* Represents media elements uploaded with the {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
* Represents media elements uploaded with the {@link org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
*/
public class Media {
......
......@@ -19,7 +19,7 @@ package org.apache.flink.contrib.tweetinputformat.model.tweet.entities;
/**
* An array of financial symbols starting with the dollar sign extracted from the
* {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet} text.
* {@link org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet} text.
*/
public class Symbol {
......
......@@ -19,7 +19,7 @@ package org.apache.flink.contrib.tweetinputformat.model.tweet.entities;
/**
* Represents URLs included in the text of a Tweet or within textual fields of a
* {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.User.Users} object.
* {@link org.apache.flink.contrib.tweetinputformat.model.tweet.entities.UserMention} object.
*/
public class URL {
......
......@@ -19,7 +19,7 @@ package org.apache.flink.contrib.tweetinputformat.model.tweet.entities;
/**
* Represents other Twitter users mentioned in the text of the
* {@link package org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
* {@link org.apache.flink.contrib.tweetinputformat.model.tweet.Tweet}.
*/
public class UserMention {
......
......@@ -677,7 +677,7 @@ public class ExecutionConfig implements Serializable {
private static final long serialVersionUID = 1L;
/**
* Convert UserConfig into a Map<String, String> representation.
* Convert UserConfig into a {@code Map<String, String>} representation.
* This can be used by the runtime, for example for presenting the user config in the web frontend.
*
* @return Key/Value representation of the UserConfig, or null.
......
......@@ -33,10 +33,10 @@ public enum ExecutionMode {
* pipelined manner) are data flows that branch (one data set consumed by multiple
* operations) and re-join later:
* <pre>{@code
* DataSet data = ...;
* DataSet mapped1 = data.map(new MyMapper());
* DataSet mapped2 = data.map(new AnotherMapper());
* mapped1.join(mapped2).where(...).equalTo(...);
* DataSet data = ...;
* DataSet mapped1 = data.map(new MyMapper());
* DataSet mapped2 = data.map(new AnotherMapper());
* mapped1.join(mapped2).where(...).equalTo(...);
* }</pre>
*/
PIPELINED,
......
......@@ -23,7 +23,7 @@ import java.util.TreeMap;
/**
* Histogram accumulator, which builds a histogram in a distributed manner.
* Implemented as a Integer->Integer TreeMap, so that the entries are sorted
* Implemented as a Integer-&gt;Integer TreeMap, so that the entries are sorted
* according to the values.
*
* This class does not extend to continuous values later, because it makes no
......
......@@ -47,7 +47,7 @@ import org.apache.flink.types.Value;
* }
*
* public boolean filter (Double value) {
* if (value > 1000000.0) {
* if (value &gt; 1000000.0) {
* agg.aggregate(1);
* return false
* }
......
......@@ -39,7 +39,7 @@ public interface DataDistribution extends IOReadableWritable, Serializable {
* <p>
* Note: The last bucket's upper bound is actually discarded by many algorithms.
* The last bucket is assumed to hold all values <i>v</i> such that
* {@code v &gt; getBucketBoundary(n-1, n)}, where <i>n</i> is the number of buckets.
* {@code v > getBucketBoundary(n-1, n)}, where <i>n</i> is the number of buckets.
*
* @param bucketNum The number of the bucket for which to get the upper bound.
* @param totalNumBuckets The number of buckets to split the data into.
......
......@@ -28,12 +28,12 @@ import org.apache.flink.util.Collector;
* If a key is present in only one of the two inputs, it may be that one of the groups is empty.
* <p>
* The basic syntax for using CoGoup on two data sets is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> set1 = ...;
* DataSet<Y> set2 = ...;
*
* set1.coGroup(set2).where(<key-definition>).equalTo(<key-definition>).with(new MyCoGroupFunction());
* </blockquote></pre>
* }</pre>
* <p>
* {@code set1} is here considered the first input, {@code set2} the second input.
* <p>
......
......@@ -28,12 +28,12 @@ import java.io.Serializable;
* pair of elements, instead of processing 2-tuples that contain the pairs.
* <p>
* The basic syntax for using Cross on two data sets is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> set1 = ...;
* DataSet<Y> set2 = ...;
*
* set1.cross(set2).with(new MyCrossFunction());
* </blockquote></pre>
* }</pre>
* <p>
* {@code set1} is here considered the first input, {@code set2} the second input.
*
......
......@@ -25,11 +25,11 @@ import java.io.Serializable;
* The predicate decides whether to keep the element, or to discard it.
* <p>
* The basic syntax for using a FilterFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<X> result = input.filter(new MyFilterFunction());
* </blockquote></pre>
* }</pre>
* <p>
* <strong>IMPORTANT:</strong> The system assumes that the function does not
* modify the elements on which the predicate is applied. Violating this assumption
......
......@@ -34,12 +34,12 @@ import java.io.Serializable;
* if their key is not contained in the other data set.
* <p>
* The basic syntax for using Join on two data sets is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> set1 = ...;
* DataSet<Y> set2 = ...;
*
* set1.join(set2).where(<key-definition>).equalTo(<key-definition>).with(new MyJoinFunction());
* </blockquote></pre>
* }</pre>
* <p>
* {@code set1} is here considered the first input, {@code set2} the second input.
* <p>
......
......@@ -29,11 +29,11 @@ import java.io.Serializable;
* use the {@link MapFunction}.
* <p>
* The basic syntax for using a FlatMapFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<Y> result = input.flatMap(new MyFlatMapFunction());
* </blockquote></pre>
* }</pre>
*
* @param <T> Type of the input elements.
* @param <O> Type of the returned elements.
......
......@@ -25,12 +25,12 @@ import java.io.Serializable;
* a single value, by applying a binary operation to an initial accumulator element every element from a group elements.
* <p>
* The basic syntax for using a FoldFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* X initialValue = ...;
* DataSet<X> result = input.fold(new MyFoldFunction(), initialValue);
* </blockquote></pre>
* }</pre>
* <p>
* Like all functions, the FoldFunction needs to be serializable, as defined in {@link java.io.Serializable}.
*
......
......@@ -32,11 +32,11 @@ import org.apache.flink.util.Collector;
* {@link ReduceFunction}.
* <p>
* The basic syntax for using a grouped GroupReduceFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<X> result = input.groupBy(<key-definition>).reduceGroup(new MyGroupReduceFunction());
* </blockquote></pre>
* }</pre>
*
* @param <T> Type of the elements that this function processes.
* @param <O> The type of the elements returned by the user-defined function.
......
......@@ -29,12 +29,12 @@ import java.io.Serializable;
* if their key is not contained in the other data set.
* <p>
* The basic syntax for using Join on two data sets is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> set1 = ...;
* DataSet<Y> set2 = ...;
*
* set1.join(set2).where(<key-definition>).equalTo(<key-definition>).with(new MyJoinFunction());
* </blockquote></pre>
* }</pre>
* <p>
* {@code set1} is here considered the first input, {@code set2} the second input.
* <p>
......
......@@ -28,11 +28,11 @@ import java.io.Serializable;
* using the {@link FlatMapFunction}.
* <p>
* The basic syntax for using a MapFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<Y> result = input.map(new MyMapFunction());
* </blockquote></pre>
* }</pre>
*
* @param <T> Type of the input elements.
* @param <O> Type of the returned elements.
......
......@@ -31,11 +31,11 @@ import java.io.Serializable;
* For most of the simple use cases, consider using the {@link MapFunction} or {@link FlatMapFunction}.
* <p>
* The basic syntax for a MapPartitionFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<Y> result = input.mapPartition(new MyMapPartitionFunction());
* </blockquote></pre>
* }</pre>
*
* @param <T> Type of the input elements.
* @param <O> Type of the returned elements.
......
......@@ -32,11 +32,11 @@ import java.io.Serializable;
* execution strategies.
* <p>
* The basic syntax for using a grouped ReduceFunction is as follows:
* <pre><blockquote>
* <pre>{@code
* DataSet<X> input = ...;
*
* DataSet<X> result = input.groupBy(<key-definition>).reduce(new MyReduceFunction());
* </blockquote></pre>
* }</pre>
* <p>
* Like all functions, the ReduceFunction needs to be serializable, as defined in {@link java.io.Serializable}.
*
......
......@@ -49,7 +49,7 @@ public abstract class RichGroupReduceFunction<IN, OUT> extends AbstractRichFunct
* <p>
* This method is only ever invoked when the subclass of {@link RichGroupReduceFunction}
* adds the {@link Combinable} annotation, or if the <i>combinable</i> flag is set when defining
* the <i>reduceGroup<i> operation via
* the <i>reduceGroup</i> operation via
* org.apache.flink.api.java.operators.GroupReduceOperator#setCombinable(boolean).
* <p>
* Since the reduce function will be called on the result of this method, it is important that this
......
......@@ -51,7 +51,7 @@ public abstract class FileOutputFormat<IT> extends RichOutputFormat<IT> implemen
/** A directory is always created, regardless of number of write tasks. */
ALWAYS,
/** A directory is only created for parallel output tasks, i.e., number of output tasks > 1.
/** A directory is only created for parallel output tasks, i.e., number of output tasks &gt; 1.
* If number of output tasks = 1, the output is written to a single file. */
PARONLY
}
......
......@@ -105,7 +105,7 @@ public abstract class AbstractUdfOperator<OUT, FT extends Function> extends Oper
* Clears all previous broadcast inputs and binds the given inputs as
* broadcast variables of this operator.
*
* @param inputs The <name, root> pairs to be set as broadcast inputs.
* @param inputs The {@code<name, root>} pairs to be set as broadcast inputs.
*/
public <T> void setBroadcastVariables(Map<String, Operator<T>> inputs) {
this.broadcastInputs.clear();
......
......@@ -76,7 +76,7 @@ public abstract class TypeComparator<T> implements Serializable {
* of the fields from the record, this method may extract those fields.
* <p>
* A typical example for checking the equality of two elements is the following:
* <pre>
* <pre>{@code
* E e1 = ...;
* E e2 = ...;
*
......@@ -84,7 +84,7 @@ public abstract class TypeComparator<T> implements Serializable {
*
* acc.setReference(e1);
* boolean equal = acc.equalToReference(e2);
* </pre>
* }</pre>
*
* The rational behind this method is that elements are typically compared using certain features that
* are extracted from them, (such de-serializing as a subset of fields). When setting the
......@@ -113,7 +113,7 @@ public abstract class TypeComparator<T> implements Serializable {
* elements {@code e1} and {@code e2} via a comparator, this method can be used the
* following way.
*
* <pre>
* <pre>{@code
* E e1 = ...;
* E e2 = ...;
*
......@@ -124,7 +124,7 @@ public abstract class TypeComparator<T> implements Serializable {
* acc2.setReference(e2);
*
* int comp = acc1.compareToReference(acc2);
* </pre>
* }</pre>
*
* The rational behind this method is that elements are typically compared using certain features that
* are extracted from them, (such de-serializing as a subset of fields). When setting the
......
......@@ -600,7 +600,7 @@ public final class ConfigConstants {
public static final boolean DEFAULT_FILESYSTEM_OVERWRITE = false;
/**
* The default behavior for output directory creating (create only directory when parallelism > 1).
* The default behavior for output directory creating (create only directory when parallelism &gt; 1).
*/
public static final boolean DEFAULT_FILESYSTEM_ALWAYS_CREATE_DIRECTORY = false;
......
......@@ -162,8 +162,6 @@ public abstract class FileSystem {
*
* @return a reference to the {@link FileSystem} instance for accessing the
* local file system.
* @throws IOException
* thrown if a reference to the file system instance could not be obtained
*/
public static FileSystem getLocalFileSystem() {
// this should really never fail.
......@@ -485,20 +483,20 @@ public abstract class FileSystem {
/**
* Initializes output directories on local file systems according to the given write mode.
*
* WriteMode.CREATE & parallel output:
* WriteMode.CREATE &amp; parallel output:
* - A directory is created if the output path does not exist.
* - An existing directory is reused, files contained in the directory are NOT deleted.
* - An existing file raises an exception.
*
* WriteMode.CREATE & NONE parallel output:
* WriteMode.CREATE &amp; NONE parallel output:
* - An existing file or directory raises an exception.
*
* WriteMode.OVERWRITE & parallel output:
* WriteMode.OVERWRITE &amp; parallel output:
* - A directory is created if the output path does not exist.
* - An existing directory is reused, files contained in the directory are NOT deleted.
* - An existing file is deleted and replaced by a new directory.
*
* WriteMode.OVERWRITE & NONE parallel output:
* WriteMode.OVERWRITE &amp; NONE parallel output:
* - An existing file or directory (and all its content) is deleted
*
* Files contained in an existing directory are not deleted, because multiple instances of a
......@@ -646,19 +644,19 @@ public abstract class FileSystem {
/**
* Initializes output directories on distributed file systems according to the given write mode.
*
* WriteMode.CREATE & parallel output:
* WriteMode.CREATE &amp; parallel output:
* - A directory is created if the output path does not exist.
* - An existing file or directory raises an exception.
*
* WriteMode.CREATE & NONE parallel output:
* WriteMode.CREATE &amp; NONE parallel output:
* - An existing file or directory raises an exception.
*
* WriteMode.OVERWRITE & parallel output:
* WriteMode.OVERWRITE &amp; parallel output:
* - A directory is created if the output path does not exist.
* - An existing directory and its content is deleted and a new directory is created.
* - An existing file is deleted and replaced by a new directory.
*
* WriteMode.OVERWRITE & NONE parallel output:
* WriteMode.OVERWRITE &amp; NONE parallel output:
* - An existing file or directory is deleted and replaced by a new directory.
*
* @param outPath Output path that should be prepared.
......
......@@ -210,7 +210,7 @@ public abstract class MemorySegment {
}
/**
* Wraps the chunk of the underlying memory located between <tt>offset<tt> and
* Wraps the chunk of the underlying memory located between <tt>offset</tt> and
* <tt>length</tt> in a NIO ByteBuffer.
*
* @param offset The offset in the memory segment.
......@@ -1220,7 +1220,7 @@ public abstract class MemorySegment {
* @param offset2 Offset of seg2 to start comparing
* @param len Length of the compared memory region
*
* @return 0 if equal, -1 if seg1 < seg2, 1 otherwise
* @return 0 if equal, -1 if seg1 &lt; seg2, 1 otherwise
*/
public final int compare(MemorySegment seg2, int offset1, int offset2, int len) {
while (len >= 8) {
......
......@@ -803,7 +803,7 @@ public final class Record implements Value, CopyableValue<Record> {
* Bin-copies fields from a source record to this record. The following caveats apply:
*
* If the source field is in a modified state, no binary representation will exist yet.
* In that case, this method is equivalent to setField(..., source.getField(..., <class>)).
* In that case, this method is equivalent to {@code setField(..., source.getField(..., <class>))}.
* In particular, if setValue is called on the source field Value instance, that change
* will propagate to this record.
*
......
......@@ -377,7 +377,7 @@ public class StringValue implements NormalizableKey<StringValue>, CharSequence,
* @param prefix The prefix character sequence.
* @param startIndex The position to start checking for the prefix.
*
* @return True, if this StringValue substring, starting at position <code>startIndex</code> has </code>prefix</code>
* @return True, if this StringValue substring, starting at position <code>startIndex</code> has <code>prefix</code>
* as its prefix.
*/
public boolean startsWith(CharSequence prefix, int startIndex) {
......@@ -403,7 +403,7 @@ public class StringValue implements NormalizableKey<StringValue>, CharSequence,
*
* @param prefix The prefix character sequence.
*
* @return True, if this StringValue has </code>prefix</code> as its prefix.
* @return True, if this StringValue has <code>prefix</code> as its prefix.
*/
public boolean startsWith(CharSequence prefix) {
return startsWith(prefix, 0);
......
......@@ -38,7 +38,7 @@ public interface Visitable<T extends Visitable<T>> {
* and then invokes the post-visit method.
* <p>
* A typical code example is the following:
* <code>
* <pre>{@code
* public void accept(Visitor<Operator> visitor) {
* boolean descend = visitor.preVisit(this);
* if (descend) {
......@@ -48,7 +48,7 @@ public interface Visitable<T extends Visitable<T>> {
* visitor.postVisit(this);
* }
* }
* </code>
* }</pre>
*
* @param visitor The visitor to be called with this object as the parameter.
*
......
......@@ -198,7 +198,7 @@ public class KMeans {
// USER FUNCTIONS
// *************************************************************************
/** Converts a Tuple2<Double,Double> into a Point. */
/** Converts a {@code Tuple2<Double,Double>} into a Point. */
@ForwardedFields("0->x; 1->y")
public static final class TuplePointConverter implements MapFunction<Tuple2<Double, Double>, Point> {
......@@ -208,7 +208,7 @@ public class KMeans {
}
}
/** Converts a Tuple3<Integer, Double,Double> into a Centroid. */
/** Converts a {@code Tuple3<Integer, Double,Double>} into a Centroid. */
@ForwardedFields("0->id; 1->x; 2->y")
public static final class TupleCentroidConverter implements MapFunction<Tuple3<Integer, Double, Double>, Centroid> {
......
......@@ -66,6 +66,8 @@ public class KMeansDataGenerator {
* <li><b>Optional</b> Double: Value range of cluster centers
* <li><b>Optional</b> Long: Random seed
* </ol>
*
* @throws IOException
*/
public static void main(String[] args) throws IOException {
......
......@@ -51,7 +51,7 @@ import java.util.Map;
* (see <a href="http://hadoop.apache.org/docs/r1.2.1/distcp.html">http://hadoop.apache.org/docs/r1.2.1/distcp.html</a>)
* with a dynamic input format
* Note that this tool does not deal with retriability. Additionally, empty directories are not copied over.
* <p/>
* <p>
* When running locally, local file systems paths can be used.
* However, in a distributed environment HDFS paths must be provided both as input and output.
*/
......
......@@ -53,7 +53,7 @@ import org.apache.flink.examples.java.graph.util.PageRankData;
* For example <code>"1\n2\n12\n42\n63"</code> gives five pages with IDs 1, 2, 12, 42, and 63.
* <li>Links are represented as pairs of page IDs which are separated by space
* characters. Links are separated by new-line characters.<br>
* For example <code>"1 2\n2 12\n1 12\n42 63"</code> gives four (directed) links (1)->(2), (2)->(12), (1)->(12), and (42)->(63).<br>
* For example <code>"1 2\n2 12\n1 12\n42 63"</code> gives four (directed) links (1)-&gt;(2), (2)-&gt;(12), (1)-&gt;(12), and (42)-&gt;(63).<br>
* For this simple implementation it is required that each page has at least one incoming and one outgoing link (a page can point to itself).
* </ul>
*
......
......@@ -182,7 +182,7 @@ public class LinearRegression {
// USER FUNCTIONS
// *************************************************************************
/** Converts a Tuple2<Double,Double> into a Data. */
/** Converts a {@code Tuple2<Double,Double>} into a Data. */
@ForwardedFields("0->x; 1->y")
public static final class TupleDataConverter implements MapFunction<Tuple2<Double, Double>, Data> {
......@@ -192,7 +192,7 @@ public class LinearRegression {
}
}
/** Converts a Tuple2<Double,Double> into a Params. */
/** Converts a {@code Tuple2<Double,Double>} into a Params. */
@ForwardedFields("0->theta0; 1->theta1")
public static final class TupleParamsConverter implements MapFunction<Tuple2<Double, Double>,Params> {
......
......@@ -39,7 +39,7 @@ import org.apache.flink.api.java.ExecutionEnvironment;
* This program implements the following SQL equivalent:
*
* <p>
* <code><pre>
* <pre>{@code
* SELECT
* c_custkey,
* c_name,
......@@ -64,7 +64,7 @@ import org.apache.flink.api.java.ExecutionEnvironment;
* c_acctbal,
* n_name,
* c_address
* </pre></code>
* }</pre>
*
* <p>
* Compared to the original TPC-H query this version does not print
......
......@@ -42,7 +42,7 @@ import org.apache.flink.api.java.tuple.Tuple4;
* This program implements the following SQL equivalent:
*
* <p>
* <code><pre>
* <pre>{@code
* SELECT
* l_orderkey,
* SUM(l_extendedprice*(1-l_discount)) AS revenue,
......@@ -61,7 +61,7 @@ import org.apache.flink.api.java.tuple.Tuple4;
* l_orderkey,
* o_orderdate,
* o_shippriority;
* </pre></code>
* }</pre>
*
* <p>
* Compared to the original TPC-H query this version does not sort the result by revenue
......
......@@ -35,7 +35,7 @@ import org.apache.flink.examples.java.relational.util.WebLogDataGenerator;
* This program processes web logs and relational data.
* It implements the following relational query:
*
* <code><pre>
* <pre>{@code
* SELECT
* r.pageURL,
* r.pageRank,
......@@ -50,13 +50,13 @@ import org.apache.flink.examples.java.relational.util.WebLogDataGenerator;
* WHERE v.destUrl = d.url
* AND v.visitDate < [date]
* );
* </pre></code>
* }</pre>
*
* <p>
* Input files are plain text CSV files using the pipe character ('|') as field separator.
* The tables referenced in the query can be generated using the {@link WebLogDataGenerator} and
* have the following schemas
* <code><pre>
* <pre>{@code
* CREATE TABLE Documents (
* url VARCHAR(100) PRIMARY KEY,
* contents TEXT );
......@@ -76,7 +76,7 @@ import org.apache.flink.examples.java.relational.util.WebLogDataGenerator;
* languageCode VARCHAR(6),
* searchWord VARCHAR(32),
* duration INT );
* </pre></code>
* }</pre>
*
* <p>
* Usage: <code>WebLogAnalysis &lt;documents path&gt; &lt;ranks path&gt; &lt;visits path&gt; &lt;result path&gt;</code><br>
......
......@@ -112,7 +112,7 @@ public class PojoExample {
/**
* Implements the string tokenizer that splits sentences into words as a user-defined
* FlatMapFunction. The function takes a line (String) and splits it into
* multiple pairs in the form of "(word,1)" (Tuple2<String, Integer>).
* multiple pairs in the form of "(word,1)" ({@code Tuple2<String, Integer>}).
*/
public static final class Tokenizer implements FlatMapFunction<String, Word> {
private static final long serialVersionUID = 1L;
......
......@@ -90,7 +90,7 @@ public class WordCount {
/**
* Implements the string tokenizer that splits sentences into words as a user-defined
* FlatMapFunction. The function takes a line (String) and splits it into
* multiple pairs in the form of "(word,1)" (Tuple2<String, Integer>).
* multiple pairs in the form of "(word,1)" ({@code Tuple2<String, Integer>}).
*/
public static final class Tokenizer implements FlatMapFunction<String, Tuple2<String, Integer>> {
......
......@@ -22,7 +22,7 @@ import org.apache.flink.api.common.ProgramDescription;
import org.apache.flink.examples.java.wordcount.util.WordCountData;
/**
* Same as {@link WorkCount} but implements {@link ProgramDescription} interface.
* Same as {@link WordCount} but implements {@link ProgramDescription} interface.
*
* <p>
* The input is a plain text file with lines separated by newline characters.
......@@ -35,7 +35,7 @@ import org.apache.flink.examples.java.wordcount.util.WordCountData;
* This example shows:
* <ul>
* <li>how to provide additional information (using {@link ProgramDescription} interface}, that can be displayed by
* Flink clients, ie, {@link bin/flink} and WebClient</li>
* Flink clients, ie, bin/flink and WebClient</li>
* </ul>
*
*/
......
......@@ -63,7 +63,6 @@ public final class Utils {
* Returns all GenericTypeInfos contained in a composite type.
*
* @param typeInfo
* @return
*/
public static void getContainedGenericTypes(CompositeType typeInfo, List<GenericTypeInfo<?>> target) {
for(int i = 0; i < typeInfo.getArity(); i++) {
......
......@@ -33,8 +33,8 @@ import org.apache.flink.api.common.InvalidProgramException;
* Semantic annotations can help the Flink optimizer to generate more efficient execution plans for Flink programs.
* For example, a <i>ForwardedFields</i> assertion for a map-type function can be declared as:
*
* <pre><blockquote>
* \@ForwardedFields({"f0; f1->f2"})
* <pre>{@code
* {@literal @}ForwardedFields({"f0; f1->f2"})
* public class MyMapper extends MapFunction<Tuple3<String, String, Integer>, Tuple3<String, Integer, Integer>>
* {
* public Tuple3<String, Integer, Integer> map(Tuple3<String, String, Integer> val) {
......@@ -42,7 +42,7 @@ import org.apache.flink.api.common.InvalidProgramException;
* return new Tuple3<String, Integer, Integer>(val.f0, val.f2, 1);
* }
* }
* </blockquote></pre>
* }</pre>
*
* <p>
* All annotations take Strings with expressions that refer to (nested) value fields of the input and output types of a function.
......@@ -84,17 +84,17 @@ public class FunctionAnnotation {
*
* Fields that are forwarded at the same position can be specified by their position.
* The specified position must be valid for the input and output data type and have the same type.
* For example <code>\@ForwardedFields({"f2"})</code> declares that the third field of a Java input tuple is
* For example {@code {@literal @}ForwardedFields({"f2"})} declares that the third field of a Java input tuple is
* copied to the third field of an output tuple.
*
* Fields which are unchanged copied to another position in the output are declared by specifying the
* source field expression in the input and the target field expression in the output.
* <code>\@ForwardedFields({"f0->f2"})</code> denotes that the first field of the Java input tuple is
* {@code {@literal @}ForwardedFields({"f0->f2"})} denotes that the first field of the Java input tuple is
* unchanged copied to the third field of the Java output tuple. When using the wildcard ("*") ensure that
* the number of declared fields and their types in input and output type match.
*
* Multiple forwarded fields can be annotated in one (<code>\@ForwardedFields({"f2; f3->f0; f4"})</code>)
* or separate Strings (<code>\@ForwardedFields({"f2", "f3->f0", "f4"})</code>).
* Multiple forwarded fields can be annotated in one ({@code {@literal @}ForwardedFields({"f2; f3->f0; f4"})})
* or separate Strings ({@code {@literal @}ForwardedFields({"f2", "f3->f0", "f4"})}).
*
* <b>NOTE: The use of the ForwardedFields annotation is optional.
* If used correctly, it can help the Flink optimizer to generate more efficient execution plans.
......@@ -119,17 +119,17 @@ public class FunctionAnnotation {
*
* Fields that are forwarded from the first input at the same position in the output can be
* specified by their position. The specified position must be valid for the input and output data type and have the same type.
* For example <code>\@ForwardedFieldsFirst({"f2"})</code> declares that the third field of a Java input tuple at the first input is
* For example {@code {@literal @}ForwardedFieldsFirst({"f2"})} declares that the third field of a Java input tuple at the first input is
* copied to the third field of an output tuple.
*
* Fields which are unchanged copied to another position in the output are declared by specifying the
* source field expression in the input and the target field expression in the output.
* <code>\@ForwardedFieldsFirst({"f0->f2"})</code> denotes that the first field of the Java input tuple at the first input is
* {@code {@literal @}ForwardedFieldsFirst({"f0->f2"})} denotes that the first field of the Java input tuple at the first input is
* unchanged copied to the third field of the Java output tuple. When using the wildcard ("*") ensure that
* the number of declared fields and their types in input and output type match.
*
* Multiple forwarded fields can be annotated in one (<code>\@ForwardedFieldsFirst({"f2; f3->f0; f4"})</code>)
* or separate Strings (<code>\@ForwardedFieldsFirst({"f2", "f3->f0", "f4"})</code>).
* Multiple forwarded fields can be annotated in one ({@code {@literal @}ForwardedFieldsFirst({"f2; f3->f0; f4"})})
* or separate Strings ({@code {@literal @}ForwardedFieldsFirst({"f2", "f3->f0", "f4"})}).
*
* <b>NOTE: The use of the ForwardedFieldsFirst annotation is optional.
* If used correctly, it can help the Flink optimizer to generate more efficient execution plans.
......@@ -157,17 +157,17 @@ public class FunctionAnnotation {
*
* Fields that are forwarded from the second input at the same position in the output can be
* specified by their position. The specified position must be valid for the input and output data type and have the same type.
* For example <code>\@ForwardedFieldsSecond({"f2"})</code> declares that the third field of a Java input tuple at the second input is
* For example {@code {@literal @}ForwardedFieldsSecond({"f2"})} declares that the third field of a Java input tuple at the second input is
* copied to the third field of an output tuple.
*
* Fields which are unchanged copied to another position in the output are declared by specifying the
* source field expression in the input and the target field expression in the output.
* <code>\@ForwardedFieldsSecond({"f0->f2"})</code> denotes that the first field of the Java input tuple at the second input is
* {@code {@literal @}ForwardedFieldsSecond({"f0->f2"})} denotes that the first field of the Java input tuple at the second input is
* unchanged copied to the third field of the Java output tuple. When using the wildcard ("*") ensure that
* the number of declared fields and their types in input and output type match.
*
* Multiple forwarded fields can be annotated in one (<code>\@ForwardedFieldsSecond({"f2; f3->f0; f4"})</code>)
* or separate Strings (<code>\@ForwardedFieldsSecond({"f2", "f3->f0", "f4"})</code>).
* Multiple forwarded fields can be annotated in one ({@code {@literal @}ForwardedFieldsSecond({"f2; f3->f0; f4"})})
* or separate Strings ({@code {@literal @}ForwardedFieldsSecond({"f2", "f3->f0", "f4"})}).
*
* <b>NOTE: The use of the ForwardedFieldsSecond annotation is optional.
* If used correctly, it can help the Flink optimizer to generate more efficient execution plans.
......
......@@ -30,7 +30,7 @@ import org.apache.hadoop.mapred.JobConf;
/**
* Wrapper for using HadoopInputFormats (mapred-variant) with Flink.
*
* The IF is returning a Tuple2<K,V>.
* The IF is returning a {@code Tuple2<K,V>}.
*
* @param <K> Type of the key
* @param <V> Type of the value.
......
......@@ -27,7 +27,7 @@ import org.apache.hadoop.mapred.OutputCommitter;
/**
* Wrapper for using HadoopOutputFormats (mapred-variant) with Flink.
*
* The IF is returning a Tuple2<K,V>.
* The IF is returning a {@code Tuple2<K,V>}.
*
* @param <K> Type of the key
* @param <V> Type of the value.
......
......@@ -95,7 +95,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
* </b>
*
* @param partitionFields The field positions of the partitioning keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsPartitionedBy(int... partitionFields) {
return this.splitsPartitionedBy(null, partitionFields);
......@@ -112,7 +112,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
*
* @param partitionMethodId An ID for the method that was used to partition the data across splits.
* @param partitionFields The field positions of the partitioning keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsPartitionedBy(String partitionMethodId, int... partitionFields) {
......@@ -142,7 +142,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
* </b>
*
* @param partitionFields The field expressions of the partitioning keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsPartitionedBy(String partitionFields) {
return this.splitsPartitionedBy(null, partitionFields);
......@@ -160,7 +160,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
*
* @param partitionMethodId An ID for the method that was used to partition the data across splits.
* @param partitionFields The field expressions of the partitioning keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsPartitionedBy(String partitionMethodId, String partitionFields) {
......@@ -194,7 +194,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
* </b>
*
* @param groupFields The field positions of the grouping keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsGroupedBy(int... groupFields) {
......@@ -224,7 +224,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
* </b>
*
* @param groupFields The field expressions of the grouping keys.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsGroupedBy(String groupFields) {
......@@ -257,7 +257,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
*
* @param orderFields The field positions of the grouping keys.
* @param orders The orders of the fields.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsOrderedBy(int[] orderFields, Order[] orders) {
......@@ -306,7 +306,7 @@ public class SplitDataProperties<T> implements GenericDataSourceBase.SplitDataPr
*
* @param orderFields The field expressions of the grouping key.
* @param orders The orders of the fields.
* @result This SplitDataProperties object.
* @return This SplitDataProperties object.
*/
public SplitDataProperties<T> splitsOrderedBy(String orderFields, Order[] orders) {
......
......@@ -420,7 +420,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
// --------------------------------------------------------------------------------------------
/**
* Intermediate step of a CoGroup transformation. <br/>
* Intermediate step of a CoGroup transformation. <br>
* To continue the CoGroup transformation, select the grouping key of the first input {@link DataSet} by calling
* {@link org.apache.flink.api.java.operators.CoGroupOperator.CoGroupOperatorSets#where(int...)} or {@link org.apache.flink.api.java.operators.CoGroupOperator.CoGroupOperatorSets#where(KeySelector)}.
*
......@@ -442,9 +442,9 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Continues a CoGroup transformation. <br/>
* Defines the {@link Tuple} fields of the first co-grouped {@link DataSet} that should be used as grouping keys.<br/>
* <b>Note: Fields can only be selected as grouping keys on Tuple DataSets.</b><br/>
* Continues a CoGroup transformation. <br>
* Defines the {@link Tuple} fields of the first co-grouped {@link DataSet} that should be used as grouping keys.<br>
* <b>Note: Fields can only be selected as grouping keys on Tuple DataSets.</b><br>
*
*
* @param fields The indexes of the Tuple fields of the first co-grouped DataSets that should be used as keys.
......@@ -459,7 +459,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Continues a CoGroup transformation. <br/>
* Continues a CoGroup transformation. <br>
* Defines the fields of the first co-grouped {@link DataSet} that should be used as grouping keys. Fields
* are the names of member fields of the underlying type of the data set.
*
......@@ -476,9 +476,9 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Continues a CoGroup transformation and defines a {@link KeySelector} function for the first co-grouped {@link DataSet}.</br>
* Continues a CoGroup transformation and defines a {@link KeySelector} function for the first co-grouped {@link DataSet}.<br>
* The KeySelector function is called for each element of the first DataSet and extracts a single
* key value on which the DataSet is grouped. </br>
* key value on which the DataSet is grouped. <br>
*
* @param keyExtractor The KeySelector function which extracts the key values from the DataSet on which it is grouped.
* @return An incomplete CoGroup transformation.
......@@ -495,7 +495,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
// ----------------------------------------------------------------------------------------
/**
* Intermediate step of a CoGroup transformation. <br/>
* Intermediate step of a CoGroup transformation. <br>
* To continue the CoGroup transformation, select the grouping key of the second input {@link DataSet} by calling
* {@link org.apache.flink.api.java.operators.CoGroupOperator.CoGroupOperatorSets.CoGroupOperatorSetsPredicate#equalTo(int...)} or {@link org.apache.flink.api.java.operators.CoGroupOperator.CoGroupOperatorSets.CoGroupOperatorSetsPredicate#equalTo(KeySelector)}.
*
......@@ -518,8 +518,8 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Continues a CoGroup transformation and defines the {@link Tuple} fields of the second co-grouped
* {@link DataSet} that should be used as grouping keys.<br/>
* <b>Note: Fields can only be selected as grouping keys on Tuple DataSets.</b><br/>
* {@link DataSet} that should be used as grouping keys.<br>
* <b>Note: Fields can only be selected as grouping keys on Tuple DataSets.</b><br>
*
*
* @param fields The indexes of the Tuple fields of the second co-grouped DataSet that should be used as keys.
......@@ -532,7 +532,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Continues a CoGroup transformation and defines the fields of the second co-grouped
* {@link DataSet} that should be used as grouping keys.<br/>
* {@link DataSet} that should be used as grouping keys.<br>
*
*
* @param fields The fields of the first co-grouped DataSets that should be used as keys.
......@@ -544,9 +544,9 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Continues a CoGroup transformation and defines a {@link KeySelector} function for the second co-grouped {@link DataSet}.</br>
* Continues a CoGroup transformation and defines a {@link KeySelector} function for the second co-grouped {@link DataSet}.<br>
* The KeySelector function is called for each element of the second DataSet and extracts a single
* key value on which the DataSet is grouped. </br>
* key value on which the DataSet is grouped. <br>
*
* @param keyExtractor The KeySelector function which extracts the key values from the second DataSet on which it is grouped.
* @return An incomplete CoGroup transformation.
......@@ -558,7 +558,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Intermediate step of a CoGroup transformation. <br/>
* Intermediate step of a CoGroup transformation. <br>
* To continue the CoGroup transformation, provide a {@link org.apache.flink.api.common.functions.RichCoGroupFunction} by calling
* {@link org.apache.flink.api.java.operators.CoGroupOperator.CoGroupOperatorSets.CoGroupOperatorSetsPredicate.CoGroupOperatorWithoutFunction#with(org.apache.flink.api.common.functions.CoGroupFunction)}.
*
......@@ -634,7 +634,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
}
/**
* Finalizes a CoGroup transformation by applying a {@link org.apache.flink.api.common.functions.RichCoGroupFunction} to groups of elements with identical keys.<br/>
* Finalizes a CoGroup transformation by applying a {@link org.apache.flink.api.common.functions.RichCoGroupFunction} to groups of elements with identical keys.<br>
* Each CoGroupFunction call returns an arbitrary number of keys.
*
* @param function The CoGroupFunction that is called for all groups of elements with identical keys.
......@@ -660,8 +660,8 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group in the first input on the
* specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br/>
* specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortFirstGroup(int, Order)} calls.
*
* @param field The Tuple field on which the group is sorted.
......@@ -690,8 +690,8 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group in the second input on the
* specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br/>
* specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortSecondGroup(int, Order)} calls.
*
* @param field The Tuple field on which the group is sorted.
......@@ -720,7 +720,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Sorts Pojo or {@link org.apache.flink.api.java.tuple.Tuple} elements within a group in the first input on the
* specified field in the specified {@link Order}.</br>
* specified field in the specified {@link Order}.<br>
* Groups can be sorted by multiple fields by chaining {@link #sortFirstGroup(String, Order)} calls.
*
* @param fieldExpression The expression to the field on which the group is to be sorted.
......@@ -745,7 +745,7 @@ public class CoGroupOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OU
/**
* Sorts Pojo or {@link org.apache.flink.api.java.tuple.Tuple} elements within a group in the second input on the
* specified field in the specified {@link Order}.</br>
* specified field in the specified {@link Order}.<br>
* Groups can be sorted by multiple fields by chaining {@link #sortSecondGroup(String, Order)} calls.
*
* @param fieldExpression The expression to the field on which the group is to be sorted.
......
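For reference, a minimal sketch of how the CoGroup API documented in the hunks above is used. The helper method, the input field layouts, and the counting logic are illustrative assumptions, not part of this commit:

    // Co-group two Tuple2 DataSets on their first field and count matches per key.
    public static DataSet<Tuple2<Long, Integer>> countPaymentsPerOrder(
            DataSet<Tuple2<Long, String>> orders,      // (orderId, description), assumed layout
            DataSet<Tuple2<Long, Double>> payments) {  // (orderId, amount), assumed layout
        return orders.coGroup(payments)
            .where(0)      // grouping key of the first input
            .equalTo(0)    // grouping key of the second input
            .with(new CoGroupFunction<Tuple2<Long, String>, Tuple2<Long, Double>, Tuple2<Long, Integer>>() {
                @Override
                public void coGroup(Iterable<Tuple2<Long, String>> first,
                                    Iterable<Tuple2<Long, Double>> second,
                                    Collector<Tuple2<Long, Integer>> out) {
                    // count the elements of the second input that share the key
                    int numPayments = 0;
                    for (Tuple2<Long, Double> p : second) {
                        numPayments++;
                    }
                    // emit one result per element of the first input
                    for (Tuple2<Long, String> o : first) {
                        out.collect(new Tuple2<>(o.f0, numPayments));
                    }
                }
            });
    }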
......@@ -108,7 +108,7 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
// --------------------------------------------------------------------------------------------
/**
* A Cross transformation that wraps pairs of crossed elements into {@link Tuple2}.<br/>
* A Cross transformation that wraps pairs of crossed elements into {@link Tuple2}.<br>
* It also represents the {@link DataSet} that is the result of a Cross transformation.
*
* @param <I1> The type of the first input DataSet of the Cross transformation.
......@@ -137,7 +137,7 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Finalizes a Cross transformation by applying a {@link CrossFunction} to each pair of crossed elements.<br/>
* Finalizes a Cross transformation by applying a {@link CrossFunction} to each pair of crossed elements.<br>
* Each CrossFunction call returns exactly one element.
*
* @param function The CrossFunction that is called for each pair of crossed elements.
......@@ -157,9 +157,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Initiates a ProjectCross transformation and projects the first cross input<br/>
* Initiates a ProjectCross transformation and projects the first cross input<br>
* If the first cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.ProjectCross#projectFirst(int...)} and
......@@ -182,9 +182,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Initiates a ProjectCross transformation and projects the second cross input<br/>
* Initiates a ProjectCross transformation and projects the second cross input<br>
* If the second cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.ProjectCross#projectFirst(int...)} and
......@@ -210,7 +210,7 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
/**
* A Cross transformation that projects crossing elements or fields of crossing {@link Tuple Tuples}
* into result {@link Tuple Tuples}. <br/>
* into result {@link Tuple Tuples}. <br>
* It also represents the {@link DataSet} that is the result of a Cross transformation.
*
* @param <I1> The type of the first input DataSet of the Cross transformation.
......@@ -250,9 +250,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Continues a ProjectCross transformation and adds fields of the first cross input to the projection.<br/>
* Continues a ProjectCross transformation and adds fields of the first cross input to the projection.<br>
* If the first cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Additional fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.ProjectCross#projectFirst(int...)} and
......@@ -277,9 +277,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Continues a ProjectCross transformation and adds fields of the second cross input to the projection.<br/>
* Continues a ProjectCross transformation and adds fields of the second cross input to the projection.<br>
* If the second cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Additional fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.ProjectCross#projectFirst(int...)} and
......@@ -493,9 +493,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Continues a ProjectCross transformation and adds fields of the first cross input.<br/>
* Continues a ProjectCross transformation and adds fields of the first cross input.<br>
* If the first cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.CrossProjection#projectFirst(int...)} and
......@@ -559,9 +559,9 @@ public class CrossOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1, I2, OUT,
}
/**
* Continues a ProjectCross transformation and adds fields of the second cross input.<br/>
* Continues a ProjectCross transformation and adds fields of the second cross input.<br>
* If the second cross input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second cross input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.CrossOperator.CrossProjection#projectFirst(int...)} and
......
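A short sketch of the projection-style Cross described above, following the chaining pattern named in these JavaDocs; the input sets and field indexes are illustrative assumptions:

    // Cross two Tuple DataSets and assemble the result from selected fields of both inputs.
    public static DataSet<Tuple3<Integer, String, Double>> crossAndProject(
            DataSet<Tuple2<Integer, String>> first,    // assumed layout
            DataSet<Tuple2<Integer, Double>> second) { // assumed layout
        return first.cross(second)
            .projectFirst(0, 1)   // fields 0 and 1 of the first input
            .projectSecond(1);    // field 1 of the second input
    }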
......@@ -95,8 +95,8 @@ public class DataSink<T> {
/**
* Sorts each local partition of a {@link org.apache.flink.api.java.tuple.Tuple} data set
* on the specified field in the specified {@link Order} before it is emitted by the output format.</br>
* <b>Note: Only tuple data sets can be sorted using integer field indices.</b><br/>
* on the specified field in the specified {@link Order} before it is emitted by the output format.<br>
* <b>Note: Only tuple data sets can be sorted using integer field indices.</b><br>
* The tuple data set can be sorted on multiple fields in different orders
* by chaining {@link #sortLocalOutput(int, Order)} calls.
*
......@@ -149,9 +149,9 @@ public class DataSink<T> {
/**
* Sorts each local partition of a data set on the field(s) specified by the field expression
* in the specified {@link Order} before it is emitted by the output format.</br>
* in the specified {@link Order} before it is emitted by the output format.<br>
* <b>Note: Non-composite types can only be sorted on the full element which is specified by
* a wildcard expression ("*" or "_").</b><br/>
* a wildcard expression ("*" or "_").</b><br>
* Data sets of composite types (Tuple or Pojo) can be sorted on multiple fields in different orders
* by chaining {@link #sortLocalOutput(String, Order)} calls.
*
......
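A small fragment showing the locally sorted output described above; the data set "result", the output path, and the chosen fields are assumptions made for illustration:

    // Sort each output partition on field 0 (ascending), then field 1 (descending),
    // before the records are handed to the output format.
    result.writeAsCsv("/tmp/sorted-output")
          .sortLocalOutput(0, Order.ASCENDING)
          .sortLocalOutput(1, Order.DESCENDING);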
......@@ -23,7 +23,7 @@ import org.apache.flink.api.common.functions.Partitioner;
import org.apache.flink.api.java.DataSet;
/**
* Grouping is an intermediate step for a transformation on a grouped DataSet.<br/>
* Grouping is an intermediate step for a transformation on a grouped DataSet.<br>
* The following transformation can be applied on Grouping:
* <ul>
* <li>{@link UnsortedGrouping#reduce(org.apache.flink.api.common.functions.ReduceFunction)},</li>
......
......@@ -184,7 +184,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
// --------------------------------------------------------------------------------------------
/**
* A Join transformation that applies a {@link JoinFunction} on each pair of joining elements.<br/>
* A Join transformation that applies a {@link JoinFunction} on each pair of joining elements.<br>
* It also represents the {@link DataSet} that is the result of a Join transformation.
*
* @param <I1> The type of the first input DataSet of the Join transformation.
......@@ -525,7 +525,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* A Join transformation that wraps pairs of joining elements into {@link Tuple2}.<br/>
* A Join transformation that wraps pairs of joining elements into {@link Tuple2}.<br>
* It also represents the {@link DataSet} that is the result of a Join transformation.
*
* @param <I1> The type of the first input DataSet of the Join transformation.
......@@ -545,7 +545,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Finalizes a Join transformation by applying a {@link org.apache.flink.api.common.functions.RichFlatJoinFunction} to each pair of joined elements.<br/>
* Finalizes a Join transformation by applying a {@link org.apache.flink.api.common.functions.RichFlatJoinFunction} to each pair of joined elements.<br>
* Each JoinFunction call returns exactly one element.
*
* @param function The JoinFunction that is called for each pair of joined elements.
......@@ -587,9 +587,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Applies a ProjectJoin transformation and projects the first join input<br/>
* Applies a ProjectJoin transformation and projects the first join input<br>
* If the first join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.ProjectJoin#projectFirst(int...)} and
......@@ -613,9 +613,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Applies a ProjectJoin transformation and projects the second join input<br/>
* Applies a ProjectJoin transformation and projects the second join input<br>
* If the second join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.ProjectJoin#projectFirst(int...)} and
......@@ -657,7 +657,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
/**
* A Join transformation that projects joining elements or fields of joining {@link Tuple Tuples}
* into result {@link Tuple Tuples}. <br/>
* into result {@link Tuple Tuples}. <br>
* It also represents the {@link DataSet} that is the result of a Join transformation.
*
* @param <I1> The type of the first input DataSet of the Join transformation.
......@@ -693,9 +693,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Continues a ProjectJoin transformation and adds fields of the first join input to the projection.<br/>
* Continues a ProjectJoin transformation and adds fields of the first join input to the projection.<br>
* If the first join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Additional fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.ProjectJoin#projectFirst(int...)} and
......@@ -720,9 +720,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Continues a ProjectJoin transformation and adds fields of the second join input to the projection.<br/>
* Continues a ProjectJoin transformation and adds fields of the second join input to the projection.<br>
* If the second join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Additional fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.ProjectJoin#projectFirst(int...)} and
......@@ -846,7 +846,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
// }
/**
* Intermediate step of a Join transformation. <br/>
* Intermediate step of a Join transformation. <br>
* To continue the Join transformation, select the join key of the first input {@link DataSet} by calling
* {@link JoinOperatorSets#where(int...)} or
* {@link JoinOperatorSets#where(org.apache.flink.api.java.functions.KeySelector)}.
......@@ -906,7 +906,7 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
/**
* Intermediate step of a Join transformation. <br/>
* Intermediate step of a Join transformation. <br>
* To continue the Join transformation, select the join key of the second input {@link DataSet} by calling
* {@link org.apache.flink.api.java.operators.JoinOperator.JoinOperatorSets.JoinOperatorSetsPredicate#equalTo(int...)} or
* {@link org.apache.flink.api.java.operators.JoinOperator.JoinOperatorSets.JoinOperatorSetsPredicate#equalTo(KeySelector)}.
......@@ -919,9 +919,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
/**
* Continues a Join transformation and defines the {@link Tuple} fields of the second join
* {@link DataSet} that should be used as join keys.<br/>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br/>
* <p/>
* {@link DataSet} that should be used as join keys.<br>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br>
* <p>
* The resulting {@link DefaultJoin} wraps each pair of joining elements into a {@link Tuple2}, with
* the element of the first input being the first field of the tuple and the element of the
* second input being the second field of the tuple.
......@@ -936,8 +936,8 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
/**
* Continues a Join transformation and defines the fields of the second join
* {@link DataSet} that should be used as join keys.<br/>
* <p/>
* {@link DataSet} that should be used as join keys.<br>
* <p>
* The resulting {@link DefaultJoin} wraps each pair of joining elements into a {@link Tuple2}, with
* the element of the first input being the first field of the tuple and the element of the
* second input being the second field of the tuple.
......@@ -951,10 +951,10 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Continues a Join transformation and defines a {@link KeySelector} function for the second join {@link DataSet}.</br>
* Continues a Join transformation and defines a {@link KeySelector} function for the second join {@link DataSet}.<br>
* The KeySelector function is called for each element of the second DataSet and extracts a single
* key value on which the DataSet is joined. </br>
* <p/>
* key value on which the DataSet is joined. <br>
* <p>
* The resulting {@link DefaultJoin} wraps each pair of joining elements into a {@link Tuple2}, with
* the element of the first input being the first field of the tuple and the element of the
* second input being the second field of the tuple.
......@@ -1143,9 +1143,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Continues a ProjectJoin transformation and adds fields of the first join input.<br/>
* Continues a ProjectJoin transformation and adds fields of the first join input.<br>
* If the first join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the first join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.JoinProjection#projectFirst(int...)} and
......@@ -1207,9 +1207,9 @@ public abstract class JoinOperator<I1, I2, OUT> extends TwoInputUdfOperator<I1,
}
/**
* Continues a ProjectJoin transformation and adds fields of the second join input.<br/>
* Continues a ProjectJoin transformation and adds fields of the second join input.<br>
* If the second join input is a {@link Tuple} {@link DataSet}, fields can be selected by their index.
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br/>
* If the second join input is not a Tuple DataSet, no parameters should be passed.<br>
*
* Fields of the first and second input can be added by chaining the method calls of
* {@link org.apache.flink.api.java.operators.JoinOperator.JoinProjection#projectFirst(int...)} and
......
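For reference, two minimal sketches of the Join variants documented above, the default join that wraps each pair into a Tuple2 and the projection join; the helper methods, input sets, and field indexes are illustrative assumptions:

    // Default join: each matching pair is wrapped into a Tuple2 of the two input elements.
    public static DataSet<Tuple2<Tuple2<Long, String>, Tuple2<Long, String>>> defaultJoin(
            DataSet<Tuple2<Long, String>> users,    // (userId, name), assumed layout
            DataSet<Tuple2<Long, String>> visits) { // (userId, url), assumed layout
        return users.join(visits).where(0).equalTo(0);
    }

    // Projection join: build the result tuple directly from selected fields of both inputs.
    public static DataSet<Tuple2<String, String>> projectionJoin(
            DataSet<Tuple2<Long, String>> users,
            DataSet<Tuple2<Long, String>> visits) {
        return users.join(visits).where(0).equalTo(0)
            .projectFirst(1)    // user name
            .projectSecond(1);  // visited url
    }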
......@@ -125,14 +125,14 @@ public abstract class SingleInputUdfOperator<IN, OUT, O extends SingleInputUdfOp
* <p>
* Fields which are unchanged copied to another position in the output are declared by specifying the
* source field reference in the input and the target field reference in the output.
* <code>withForwardedFields("f0->f2")</code> denotes that the first field of the Java input tuple is
* {@code withForwardedFields("f0->f2")} denotes that the first field of the Java input tuple is
* unchanged copied to the third field of the Java output tuple. When using a wildcard ("*") ensure that
* the number of declared fields and their types in input and output type match.
* </p>
*
* <p>
* Multiple forwarded fields can be annotated in one (<code>withForwardedFields("f2; f3->f0; f4")</code>)
* or separate Strings (<code>withForwardedFields("f2", "f3->f0", "f4")</code>).
* Multiple forwarded fields can be annotated in one ({@code withForwardedFields("f2; f3->f0; f4")})
* or separate Strings ({@code withForwardedFields("f2", "f3->f0", "f4")}).
* Please refer to the JavaDoc of {@link org.apache.flink.api.common.functions.Function} or Flink's documentation for
* details on field references such as nested fields and wildcard.
* </p>
......
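The forwarded-fields hint above is attached directly after the operator, as in this fragment; MyMapper and the input set are hypothetical, and the hint is only valid if the mapper really copies input field 0 unchanged to output field 2:

    // Declare that field 0 of the input tuple is copied unchanged to field 2 of the output tuple.
    DataSet<Tuple3<Long, String, Long>> mapped =
        input.map(new MyMapper())
             .withForwardedFields("f0->f2");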
......@@ -38,7 +38,7 @@ import org.apache.flink.api.java.typeutils.TypeExtractor;
import com.google.common.base.Preconditions;
/**
* SortedGrouping is an intermediate step for a transformation on a grouped and sorted DataSet.<br/>
* SortedGrouping is an intermediate step for a transformation on a grouped and sorted DataSet.<br>
* The following transformation can be applied on sorted groups:
* <ul>
* <li>{@link SortedGrouping#reduceGroup(org.apache.flink.api.common.functions.GroupReduceFunction)},</li>
......@@ -145,7 +145,7 @@ public class SortedGrouping<T> extends Grouping<T> {
}
/**
* Applies a GroupReduce transformation on a grouped and sorted {@link DataSet}.<br/>
* Applies a GroupReduce transformation on a grouped and sorted {@link DataSet}.<br>
* The transformation calls a {@link org.apache.flink.api.common.functions.RichGroupReduceFunction} for each group of the DataSet.
* A GroupReduceFunction can iterate over all elements of a group and emit any
* number of output elements including none.
......@@ -189,7 +189,7 @@ public class SortedGrouping<T> extends Grouping<T> {
/**
* Returns a new set containing the first n elements in this grouped and sorted {@link DataSet}.<br/>
* Returns a new set containing the first n elements in this grouped and sorted {@link DataSet}.<br>
* @param n The desired number of elements for each group.
* @return A GroupReduceOperator that represents the DataSet containing the elements.
*/
......@@ -206,8 +206,8 @@ public class SortedGrouping<T> extends Grouping<T> {
// --------------------------------------------------------------------------------------------
/**
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group on the specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple or Pojo elements can be sorted.</b><br/>
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group on the specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple or Pojo elements can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortGroup(int, Order)} calls.
*
* @param field The Tuple field on which the group is sorted.
......@@ -235,8 +235,8 @@ public class SortedGrouping<T> extends Grouping<T> {
}
/**
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} or POJO elements within a group on the specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple or Pojo elements can be sorted.</b><br/>
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} or POJO elements within a group on the specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple or Pojo elements can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortGroup(String, Order)} calls.
*
* @param field The Tuple or Pojo field on which the group is sorted.
......
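A compact sketch of the grouped-and-sorted processing described above; the data set and the meaning of its fields are assumptions for illustration:

    // Group by field 0, sort each group ascending on field 1, and keep only the
    // smallest element of every group.
    public static DataSet<Tuple2<String, Long>> earliestPerKey(
            DataSet<Tuple2<String, Long>> events) { // (key, timestamp), assumed layout
        return events.groupBy(0)
            .sortGroup(1, Order.ASCENDING)
            .first(1);
    }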
......@@ -127,14 +127,14 @@ public abstract class TwoInputUdfOperator<IN1, IN2, OUT, O extends TwoInputUdfOp
* <p>
* Fields which are unchanged copied from the first input to another position in the output are declared
* by specifying the source field reference in the first input and the target field reference in the output.
* <code>withForwardedFieldsFirst("f0->f2")</code> denotes that the first field of the first input Java tuple is
* {@code withForwardedFieldsFirst("f0->f2")} denotes that the first field of the first input Java tuple is
* unchanged copied to the third field of the Java output tuple. When using a wildcard ("*") ensure that
* the number of declared fields and their types in first input and output type match.
* </p>
*
* <p>
* Multiple forwarded fields can be annotated in one (<code>withForwardedFieldsFirst("f2; f3->f0; f4")</code>)
* or separate Strings (<code>withForwardedFieldsFirst("f2", "f3->f0", "f4")</code>).
* Multiple forwarded fields can be annotated in one ({@code withForwardedFieldsFirst("f2; f3->f0; f4")})
* or separate Strings ({@code withForwardedFieldsFirst("f2", "f3->f0", "f4")}).
* Please refer to the JavaDoc of {@link org.apache.flink.api.common.functions.Function} or Flink's documentation for
* details on field references such as nested fields and wildcard.
* </p>
......@@ -202,14 +202,14 @@ public abstract class TwoInputUdfOperator<IN1, IN2, OUT, O extends TwoInputUdfOp
* <p>
* Fields which are unchanged copied from the second input to another position in the output are declared
* by specifying the source field reference in the second input and the target field reference in the output.
* <code>withForwardedFieldsSecond("f0->f2")</code> denotes that the first field of the second input Java tuple is
* {@code withForwardedFieldsSecond("f0->f2")} denotes that the first field of the second input Java tuple is
* unchanged copied to the third field of the Java output tuple. When using a wildcard ("*") ensure that
* the number of declared fields and their types in second input and output type match.
* </p>
*
* <p>
* Multiple forwarded fields can be annotated in one (<code>withForwardedFieldsSecond("f2; f3->f0; f4")</code>)
* or separate Strings (<code>withForwardedFieldsSecond("f2", "f3->f0", "f4")</code>).
* Multiple forwarded fields can be annotated in one ({@code withForwardedFieldsSecond("f2; f3->f0; f4")})
* or separate Strings ({@code withForwardedFieldsSecond("f2", "f3->f0", "f4")}).
* Please refer to the JavaDoc of {@link org.apache.flink.api.common.functions.Function} or Flink's documentation for
* details on field references such as nested fields and wildcard.
* </p>
......
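A hedged fragment showing how the two-input variants above are attached to a join; MyJoin, left, and right are hypothetical, and the hints are only valid if the function forwards field 0 of the first input and field 1 of the second input unchanged:

    // Forwarded-field hints for a two-input operator.
    DataSet<Tuple2<Long, String>> joined =
        left.join(right).where(0).equalTo(0)
            .with(new MyJoin())
            .withForwardedFieldsFirst("f0")        // field 0 of the first input stays at position 0
            .withForwardedFieldsSecond("f1->f1");  // field 1 of the second input stays at position 1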
......@@ -62,7 +62,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
// --------------------------------------------------------------------------------------------
/**
* Applies an Aggregate transformation on a grouped {@link org.apache.flink.api.java.tuple.Tuple} {@link DataSet}.<br/>
* Applies an Aggregate transformation on a grouped {@link org.apache.flink.api.java.tuple.Tuple} {@link DataSet}.<br>
* <b>Note: Only Tuple DataSets can be aggregated.</b>
* The transformation applies a built-in {@link Aggregations Aggregation} on a specified field
* of a Tuple group. Additional aggregation functions can be added to the resulting
......@@ -120,7 +120,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Applies a Reduce transformation on a grouped {@link DataSet}.<br/>
* Applies a Reduce transformation on a grouped {@link DataSet}.<br>
* For each group, the transformation consecutively calls a {@link org.apache.flink.api.common.functions.RichReduceFunction}
* until only a single element for each group remains.
* A ReduceFunction combines two elements into one new element of the same type.
......@@ -140,7 +140,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Applies a GroupReduce transformation on a grouped {@link DataSet}.<br/>
* Applies a GroupReduce transformation on a grouped {@link DataSet}.<br>
* The transformation calls a {@link org.apache.flink.api.common.functions.RichGroupReduceFunction} for each group of the DataSet.
* A GroupReduceFunction can iterate over all elements of a group and emit any
* number of output elements including none.
......@@ -183,7 +183,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Returns a new set containing the first n elements in this grouped {@link DataSet}.<br/>
* Returns a new set containing the first n elements in this grouped {@link DataSet}.<br>
* @param n The desired number of elements for each group.
* @return A GroupReduceOperator that represents the DataSet containing the elements.
*/
......@@ -196,7 +196,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Applies a special case of a reduce transformation (minBy) on a grouped {@link DataSet}.<br/>
* Applies a special case of a reduce transformation (minBy) on a grouped {@link DataSet}.<br>
* The transformation consecutively calls a {@link ReduceFunction}
* until only a single element remains which is the result of the transformation.
* A ReduceFunction combines two elements into one new element of the same type.
......@@ -217,7 +217,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Applies a special case of a reduce transformation (maxBy) on a grouped {@link DataSet}.<br/>
* Applies a special case of a reduce transformation (maxBy) on a grouped {@link DataSet}.<br>
* The transformation consecutively calls a {@link ReduceFunction}
* until only a single element remains which is the result of the transformation.
* A ReduceFunction combines two elements into one new element of the same type.
......@@ -241,8 +241,8 @@ public class UnsortedGrouping<T> extends Grouping<T> {
// --------------------------------------------------------------------------------------------
/**
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group on the specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br/>
* Sorts {@link org.apache.flink.api.java.tuple.Tuple} elements within a group on the specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortGroup(int, Order)} calls.
*
* @param field The Tuple field on which the group is sorted.
......@@ -263,8 +263,8 @@ public class UnsortedGrouping<T> extends Grouping<T> {
}
/**
* Sorts Pojos within a group on the specified field in the specified {@link Order}.</br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br/>
* Sorts Pojos within a group on the specified field in the specified {@link Order}.<br>
* <b>Note: Only groups of Tuple elements and Pojos can be sorted.</b><br>
* Groups can be sorted by multiple fields by chaining {@link #sortGroup(String, Order)} calls.
*
* @param field The Tuple or Pojo field on which the group is sorted.
......@@ -285,7 +285,7 @@ public class UnsortedGrouping<T> extends Grouping<T> {
/**
* Sorts elements within a group on a key extracted by the specified {@link org.apache.flink.api.java.functions.KeySelector}
* in the specified {@link Order}.</br>
* in the specified {@link Order}.<br>
* Chaining {@link #sortGroup(KeySelector, Order)} calls is not supported.
*
* @param keySelector The KeySelector with which the group is sorted.
......
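Two one-line sketches of the grouped transformations documented above (built-in aggregation and minBy); the data set "sales" and its layout are assumptions:

    // sales: (product, quantity, price), assumed layout
    DataSet<Tuple3<String, Integer, Double>> totalPerProduct =
        sales.groupBy(0).aggregate(Aggregations.SUM, 2);   // sum field 2 per group

    DataSet<Tuple3<String, Integer, Double>> cheapestPerProduct =
        sales.groupBy(0).minBy(2);                         // element with minimal field 2 per group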
......@@ -33,7 +33,7 @@ import org.apache.flink.api.java.tuple.Tuple;
import org.apache.flink.api.java.typeutils.TypeExtractor;
/**
* Intermediate step of an Outer Join transformation. <br/>
* Intermediate step of an Outer Join transformation. <br>
* To continue the Join transformation, select the join key of the first input {@link DataSet} by calling
* {@link JoinOperatorSetsBase#where(int...)} or
* {@link JoinOperatorSetsBase#where(KeySelector)}.
......@@ -69,9 +69,9 @@ public class JoinOperatorSetsBase<I1, I2> {
}
/**
* Continues a Join transformation. <br/>
* Defines the {@link Tuple} fields of the first join {@link DataSet} that should be used as join keys.<br/>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br/>
* Continues a Join transformation. <br>
* Defines the {@link Tuple} fields of the first join {@link DataSet} that should be used as join keys.<br>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br>
*
* @param fields The indexes of the other Tuple fields of the first join DataSets that should be used as keys.
* @return An incomplete Join transformation.
......@@ -87,7 +87,7 @@ public class JoinOperatorSetsBase<I1, I2> {
}
/**
* Continues a Join transformation. <br/>
* Continues a Join transformation. <br>
* Defines the fields of the first join {@link DataSet} that should be used as grouping keys. Fields
* are the names of member fields of the underlying type of the data set.
*
......@@ -105,9 +105,9 @@ public class JoinOperatorSetsBase<I1, I2> {
}
/**
* Continues a Join transformation and defines a {@link KeySelector} function for the first join {@link DataSet}.</br>
* Continues a Join transformation and defines a {@link KeySelector} function for the first join {@link DataSet}.<br>
* The KeySelector function is called for each element of the first DataSet and extracts a single
* key value on which the DataSet is joined. </br>
* key value on which the DataSet is joined. <br>
*
* @param keySelector The KeySelector function which extracts the key values from the DataSet on which it is joined.
* @return An incomplete Join transformation.
......@@ -125,7 +125,7 @@ public class JoinOperatorSetsBase<I1, I2> {
/**
* Intermediate step of a Join transformation. <br/>
* Intermediate step of a Join transformation. <br>
* To continue the Join transformation, select the join key of the second input {@link DataSet} by calling
* {@link org.apache.flink.api.java.operators.join.JoinOperatorSetsBase.JoinOperatorSetsPredicateBase#equalTo(int...)} or
* {@link org.apache.flink.api.java.operators.join.JoinOperatorSetsBase.JoinOperatorSetsPredicateBase#equalTo(KeySelector)}.
......@@ -149,8 +149,8 @@ public class JoinOperatorSetsBase<I1, I2> {
/**
* Continues a Join transformation and defines the {@link Tuple} fields of the second join
* {@link DataSet} that should be used as join keys.<br/>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br/>
* {@link DataSet} that should be used as join keys.<br>
* <b>Note: Fields can only be selected as join keys on Tuple DataSets.</b><br>
*
* The resulting {@link JoinFunctionAssigner} needs to be finished by providing a
* {@link JoinFunction} by calling {@link JoinFunctionAssigner#with(JoinFunction)}
......@@ -164,7 +164,7 @@ public class JoinOperatorSetsBase<I1, I2> {
/**
* Continues a Join transformation and defines the fields of the second join
* {@link DataSet} that should be used as join keys.<br/>
* {@link DataSet} that should be used as join keys.<br>
*
* The resulting {@link JoinFunctionAssigner} needs to be finished by providing a
* {@link JoinFunction} by calling {@link JoinFunctionAssigner#with(JoinFunction)}
......@@ -177,9 +177,9 @@ public class JoinOperatorSetsBase<I1, I2> {
}
/**
* Continues a Join transformation and defines a {@link KeySelector} function for the second join {@link DataSet}.</br>
* Continues a Join transformation and defines a {@link KeySelector} function for the second join {@link DataSet}.<br>
* The KeySelector function is called for each element of the second DataSet and extracts a single
* key value on which the DataSet is joined. </br>
* key value on which the DataSet is joined. <br>
*
* The resulting {@link JoinFunctionAssigner} needs to be finished by providing a
* {@link JoinFunction} by calling {@link JoinFunctionAssigner#with(JoinFunction)}
......
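JoinOperatorSetsBase backs the outer-join variants, where a join function must be supplied explicitly. A hedged sketch, assuming the leftOuterJoin entry point available in the same code base; the helper method, inputs, and null handling are illustrative:

    public static DataSet<Tuple2<String, Integer>> ordersPerCustomer(
            DataSet<Tuple2<Long, String>> customers,   // (customerId, name), assumed layout
            DataSet<Tuple2<Long, Integer>> orders) {   // (customerId, orderCount), assumed layout
        return customers.leftOuterJoin(orders)
            .where(0)
            .equalTo(0)
            .with(new JoinFunction<Tuple2<Long, String>, Tuple2<Long, Integer>, Tuple2<String, Integer>>() {
                @Override
                public Tuple2<String, Integer> join(Tuple2<Long, String> customer,
                                                    Tuple2<Long, Integer> order) {
                    // for a left outer join, "order" is null when the customer has no match
                    return new Tuple2<>(customer.f1, order == null ? 0 : order.f1);
                }
            });
    }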
......@@ -22,7 +22,7 @@ import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.util.Collector;
/**
* Needed to wrap tuples to Tuple3<groupKey, sortKey, value> for combine method of group reduce with key selector sorting
* Needed to wrap tuples to {@code Tuple3<groupKey, sortKey, value>} for combine method of group reduce with key selector sorting
*/
public class Tuple3WrappingCollector<IN, K1, K2> implements Collector<IN>, java.io.Serializable {
......
......@@ -22,7 +22,7 @@ import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.util.Collector;
/**
* Needed to wrap tuples to Tuple2<key, value> pairs for combine method of group reduce with key selector function
* Needed to wrap tuples to {@code Tuple2<key, value>} pairs for combine method of group reduce with key selector function
*/
public class TupleWrappingCollector<IN, K> implements Collector<IN>, java.io.Serializable {
......
......@@ -44,8 +44,8 @@ import org.apache.flink.api.common.operators.util.UserCodeWrapper;
* annotation for a map-type function that realizes a simple absolute function,
* use it the following way:
*
* <pre><blockquote>
* \@ConstantFieldsExcept(fields={2})
* <pre>{@code
* {@literal @}ConstantFieldsExcept(fields={2})
* public class MyMapper extends MapFunction
* {
* public void map(Record record, Collector out)
......@@ -56,7 +56,7 @@ import org.apache.flink.api.common.operators.util.UserCodeWrapper;
out.collect(record);
* }
* }
* </blockquote></pre>
* }</pre>
*
* Be aware that some annotations should only be used for functions with as single input
* ({@link MapFunction}, {@link ReduceFunction}) and some only for stubs with two inputs
......
......@@ -32,7 +32,7 @@ import org.apache.flink.core.io.GenericInputSplit;
* The input format checks the exit code of the process to validate whether the process terminated correctly. A list of allowed exit codes can be provided.
* The input format requires ({@link ExternalProcessInputSplit} objects that hold the command to execute.
*
* <b>Attention! </b><br/>
* <b>Attention! </b><br>
* You must take care to read from (and process) both output streams of the process, standard out (stdout) and standard error (stderr).
* Otherwise, the input format might get deadlocked!
*
......
......@@ -29,8 +29,8 @@ import java.util.List;
/**
* Special type information to generate a special AvroTypeInfo for Avro POJOs (implementing SpecificRecordBase, the typed Avro POJOs)
*
* Proceeding: It uses a regular pojo type analysis and replaces all GenericType<CharSequence>
* with a GenericType<avro.Utf8>.
* Proceeding: It uses a regular pojo type analysis and replaces all {@code GenericType<CharSequence>}
* with a {@code GenericType<avro.Utf8>}.
* All other types used by Avro are standard Java types.
* Only strings are represented as CharSequence fields and represented as Utf8 classes at runtime.
* CharSequence is not comparable. To make them nicely usable with field expressions, we replace them here
......
......@@ -107,8 +107,8 @@ public class Serializers {
}
/**
* Register these serializers for using Avro's {@see GenericData.Record} and classes
* implementing {@see org.apache.avro.specific.SpecificRecordBase}
* Register these serializers for using Avro's {@link GenericData.Record} and classes
* implementing {@link org.apache.avro.specific.SpecificRecordBase}
*/
public static void registerGenericAvro(ExecutionConfig reg) {
// Avro POJOs contain java.util.List which have GenericData.Array as their runtime type
......
......@@ -206,7 +206,7 @@ public final class DataSetUtils {
* <p>
* <strong>NOTE:</strong> Sample with fixed size is not as efficient as sample with fraction, use sample with
* fraction unless you need exact precision.
* <p/>
* </p>
*
* @param withReplacement Whether element can be selected more than once.
* @param numSample The expected sample size.
......@@ -225,7 +225,7 @@ public final class DataSetUtils {
* <p>
* <strong>NOTE:</strong> Sample with fixed size is not as efficient as sample with fraction, use sample with
* fraction unless you need exact precision.
* <p/>
* </p>
*
* @param withReplacement Whether element can be selected more than once.
* @param numSample The expected sample size.
......
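A one-line fragment for the fixed-size sampling utility documented above; the input set and sample size are illustrative:

    // Draw a sample of exactly 100 elements without replacement
    // (less efficient than sampling by fraction, as the note above says).
    DataSet<String> sample = DataSetUtils.sampleWithSize(input, false, 100);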
......@@ -35,7 +35,7 @@ import org.apache.flink.api.java.tuple.Tuple6;
* This program implements the following SQL equivalent:
*
* <p>
* <code><pre>
* <pre>{@code
* SELECT
* c_custkey,
* c_name,
......@@ -60,7 +60,7 @@ import org.apache.flink.api.java.tuple.Tuple6;
* c_acctbal,
* n_name,
* c_address
* </pre></code>
* }</pre>
*
* <p>
* Compared to the original TPC-H query this version does not print
......
......@@ -352,10 +352,10 @@ public class Graph<K, VV, EV> {
* @return An instance of {@link org.apache.flink.graph.GraphCsvReader},
* on which calling methods to specify types of the Vertex ID, Vertex value and Edge value returns a Graph.
*
* @see {@link org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)} and
* {@link org.apache.flink.graph.GraphCsvReader#keyType(Class)}.
* @see org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#keyType(Class)
*/
public static GraphCsvReader fromCsvReader(String verticesPath, String edgesPath, ExecutionEnvironment context) {
return new GraphCsvReader(verticesPath, edgesPath, context);
......@@ -369,10 +369,10 @@ public class Graph<K, VV, EV> {
* @return An instance of {@link org.apache.flink.graph.GraphCsvReader},
* on which calling methods to specify types of the Vertex ID, Vertex value and Edge value returns a Graph.
*
* @see {@link org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)} and
* {@link org.apache.flink.graph.GraphCsvReader#keyType(Class)}.
* @see org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#keyType(Class)
*/
public static GraphCsvReader fromCsvReader(String edgesPath, ExecutionEnvironment context) {
return new GraphCsvReader(edgesPath, context);
......@@ -389,10 +389,10 @@ public class Graph<K, VV, EV> {
* @return An instance of {@link org.apache.flink.graph.GraphCsvReader},
* on which calling methods to specify types of the Vertex ID, Vertex Value and Edge value returns a Graph.
*
* @see {@link org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)},
* {@link org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)} and
* {@link org.apache.flink.graph.GraphCsvReader#keyType(Class)}.
* @see org.apache.flink.graph.GraphCsvReader#types(Class, Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#vertexTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#edgeTypes(Class, Class)
* @see org.apache.flink.graph.GraphCsvReader#keyType(Class)
*/
public static <K, VV> GraphCsvReader fromCsvReader(String edgesPath,
final MapFunction<K, VV> vertexValueInitializer, ExecutionEnvironment context) {
......@@ -821,7 +821,7 @@ public class Graph<K, VV, EV> {
/**
* Return the out-degree of all vertices in the graph
*
* @return A DataSet of Tuple2<vertexId, outDegree>
* @return A DataSet of {@code Tuple2<vertexId, outDegree>}
*/
public DataSet<Tuple2<K, Long>> outDegrees() {
......@@ -851,7 +851,7 @@ public class Graph<K, VV, EV> {
/**
* Return the in-degree of all vertices in the graph
*
* @return A DataSet of Tuple2<vertexId, inDegree>
* @return A DataSet of {@code Tuple2<vertexId, inDegree>}
*/
public DataSet<Tuple2<K, Long>> inDegrees() {
......@@ -861,7 +861,7 @@ public class Graph<K, VV, EV> {
/**
* Return the degree of all vertices in the graph
*
* @return A DataSet of Tuple2<vertexId, degree>
* @return A DataSet of {@code Tuple2<vertexId, degree>}
*/
public DataSet<Tuple2<K, Long>> getDegrees() {
return outDegrees().union(inDegrees()).groupBy(0).sum(1);
......
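The three degree methods touched above all return a DataSet of (vertexId, degree) pairs; a minimal fragment, assuming a hypothetical Graph&lt;Long, NullValue, Double&gt; named graph:

    DataSet<Tuple2<Long, Long>> outDegrees = graph.outDegrees();  // out-degree per vertex
    DataSet<Tuple2<Long, Long>> inDegrees  = graph.inDegrees();   // in-degree per vertex
    DataSet<Tuple2<Long, Long>> degrees    = graph.getDegrees();  // total degree per vertex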
......@@ -35,7 +35,7 @@ import java.io.Serializable;
/**
* This example shows how to use Gelly's {@link Graph#getTriplets()} and
* {@link Graph#joinWithEdges(DataSet, MapFunction)} methods.
* {@link Graph#joinWithEdges(DataSet, EdgeJoinFunction)} methods.
*
* Given a directed, unweighted graph, with vertex values representing points in a plan,
* return a weighted graph where the edge weights are equal to the Euclidean distance between the
......@@ -51,7 +51,6 @@ import java.io.Serializable;
* Edges themselves are separated by newlines.
* For example: <code>1,2\n1,3\n</code> defines two edges 1-2 and 1-3.
* </ul>
* </p>
*
* Usage <code>EuclideanGraphWeighing &lt;vertex path&gt; &lt;edge path&gt; &lt;result path&gt;</code><br>
* If no parameters are provided, the program is run with default data from
......
......@@ -42,7 +42,7 @@ import org.apache.flink.types.NullValue;
*
* The input file is expected to contain one edge per line,
* with long IDs and no values, in the following format:
* "<sourceVertexID>\t<targetVertexID>".
* "&lt;sourceVertexID&gt;\t&lt;targetVertexID&gt;".
* If no arguments are provided, the example runs with a random graph of 100 vertices.
*
*/
......
......@@ -52,7 +52,7 @@ import org.apache.flink.graph.spargel.VertexUpdateFunction;
* The edge is simply removed from the graph.
* - If the removed edge is an SP-edge, then all nodes, whose shortest path contains the removed edge,
* potentially require re-computation.
* When the edge <u, v> is removed, v checks if it has another out-going SP-edge.
* When the edge {@code <u, v>} is removed, v checks if it has another out-going SP-edge.
* If yes, no further computation is required.
* If v has no other out-going SP-edge, it invalidates its current value, by setting it to INF.
* Then, it informs all its SP-in-neighbors by sending them an INVALIDATE message.
......
......@@ -45,10 +45,10 @@ import org.apache.flink.util.Collector;
/**
* This example demonstrates how to mix the DataSet Flink API with the Gelly API.
* The input is a set <userId - songId - playCount> triplets and
* The input is a set &lt;userId - songId - playCount&gt; triplets and
* a set of bad records, i.e. song ids that should not be trusted.
* Initially, we use the DataSet API to filter out the bad records.
* Then, we use Gelly to create a user -> song weighted bipartite graph and compute
* Then, we use Gelly to create a user -&gt; song weighted bipartite graph and compute
* the top song (most listened) per user.
* Then, we use the DataSet API again, to create a user-user similarity graph,
* based on common songs, where users that are listeners of the same song
......@@ -58,11 +58,11 @@ import org.apache.flink.util.Collector;
* the similarity graph.
*
* The triplets input is expected to be given as one triplet per line,
* in the following format: "<userID>\t<songID>\t<playcount>".
* in the following format: "&lt;userID&gt;\t&lt;songID&gt;\t&lt;playcount&gt;".
*
* The mismatches input file is expected to contain one mismatch record per line,
* in the following format:
* "ERROR: <songID trackID> song_title"
* "ERROR: &lt;songID trackID&gt; song_title"
*
* If no arguments are provided, the example runs with default data from {@link MusicProfilesData}.
*/
......
......@@ -45,7 +45,7 @@ public abstract class ApplyFunction<K, VV, M> implements Serializable {
/**
* Retrieves the number of vertices in the graph.
* @return the number of vertices if the {@link IterationConfigurationion#setOptNumVertices(boolean)}
* @return the number of vertices if the {@link org.apache.flink.graph.IterationConfiguration#setOptNumVertices(boolean)}
* option has been set; -1 otherwise.
*/
public long getNumberOfVertices() {
......@@ -80,15 +80,11 @@ public abstract class ApplyFunction<K, VV, M> implements Serializable {
/**
* This method is executed once per superstep before the vertex update function is invoked for each vertex.
*
* @throws Exception Exceptions in the pre-superstep phase cause the superstep to fail.
*/
public void preSuperstep() {}
/**
* This method is executed once per superstep after the vertex update function has been invoked for each vertex.
*
* @throws Exception Exceptions in the post-superstep phase cause the superstep to fail.
*/
public void postSuperstep() {}
......
......@@ -43,7 +43,7 @@ public abstract class GatherFunction<VV, EV, M> implements Serializable {
/**
* Retrieves the number of vertices in the graph.
* @return the number of vertices if the {@link IterationConfigurationion#setOptNumVertices(boolean)}
* @return the number of vertices if the {@link org.apache.flink.graph.IterationConfiguration#setOptNumVertices(boolean)}
* option has been set; -1 otherwise.
*/
public long getNumberOfVertices() {
......@@ -69,15 +69,11 @@ public abstract class GatherFunction<VV, EV, M> implements Serializable {
/**
* This method is executed once per superstep before the vertex update function is invoked for each vertex.
*
* @throws Exception Exceptions in the pre-superstep phase cause the superstep to fail.
*/
public void preSuperstep() {}
/**
* This method is executed once per superstep after the vertex update function has been invoked for each vertex.
*
* @throws Exception Exceptions in the post-superstep phase cause the superstep to fail.
*/
public void postSuperstep() {}
......
......@@ -21,8 +21,8 @@ package org.apache.flink.graph.gsa;
import org.apache.flink.api.java.tuple.Tuple2;
/**
* This class represents a <sourceVertex, edge> pair
* This is a wrapper around Tuple2<VV, EV> for convenience in the GatherFunction
* This class represents a {@code <sourceVertex, edge>} pair
* This is a wrapper around {@code Tuple2<VV, EV>} for convenience in the GatherFunction
* @param <VV> the vertex value type
* @param <EV> the edge value type
*/
......
......@@ -43,7 +43,7 @@ public abstract class SumFunction<VV, EV, M> implements Serializable {
/**
* Retrieves the number of vertices in the graph.
* @return the number of vertices if the {@link IterationConfigurationion#setOptNumVertices(boolean)}
* @return the number of vertices if the {@link org.apache.flink.graph.IterationConfiguration#setOptNumVertices(boolean)}
* option has been set; -1 otherwise.
*/
public long getNumberOfVertices() {
......@@ -69,15 +69,11 @@ public abstract class SumFunction<VV, EV, M> implements Serializable {
/**
* This method is executed once per superstep before the vertex update function is invoked for each vertex.
*
* @throws Exception Exceptions in the pre-superstep phase cause the superstep to fail.
*/
public void preSuperstep() {}
/**
* This method is executed once per superstep after the vertex update function has been invoked for each vertex.
*
* @throws Exception Exceptions in the post-superstep phase cause the superstep to fail.
*/
public void postSuperstep() {}
......
......@@ -41,7 +41,7 @@ import org.apache.flink.types.NullValue;
*
* The result is a DataSet of vertices, where the vertex value corresponds to the assigned component ID.
*
* @see {@link org.apache.flink.graph.library.GSAConnectedComponents}
* @see org.apache.flink.graph.library.GSAConnectedComponents
*/
@SuppressWarnings("serial")
public class ConnectedComponents<K, EV> implements GraphAlgorithm<K, Long, EV, DataSet<Vertex<K, Long>>> {
......
......@@ -34,7 +34,7 @@ import org.apache.flink.types.NullValue;
* This implementation assumes that the vertices of the input Graph are initialized with unique, Long component IDs.
* The result is a DataSet of vertices, where the vertex value corresponds to the assigned component ID.
*
* @see {@link org.apache.flink.graph.library.ConnectedComponents}
* @see org.apache.flink.graph.library.ConnectedComponents
*/
public class GSAConnectedComponents<K, EV> implements GraphAlgorithm<K, Long, EV, DataSet<Vertex<K, Long>>> {
......
......@@ -61,7 +61,7 @@ public class GSAPageRank<K> implements GraphAlgorithm<K, Double, Double, DataSet
/**
* Creates an instance of the GSA PageRank algorithm.
* If the number of vertices of the input graph is known,
* use the {@link GSAPageRank#GSAPageRank(double, long)} constructor instead.
* use the {@link GSAPageRank#GSAPageRank(double, int)} constructor instead.
*
* The implementation assumes that each page has at least one incoming and one outgoing link.
*
......
......@@ -62,7 +62,7 @@ public class PageRank<K> implements GraphAlgorithm<K, Double, Double, DataSet<Ve
/**
* Creates an instance of the PageRank algorithm.
* If the number of vertices of the input graph is known,
* use the {@link PageRank#PageRank(double, long)} constructor instead.
* use the {@link PageRank#PageRank(double, int)} constructor instead.
*
* The implementation assumes that each page has at least one incoming and one outgoing link.
*
......