Commit 1190f3b1 authored by mjsax

[Storm Compatibility] Updated README.md and documentation

Parent b2aa1d9e
@@ -52,9 +52,12 @@ Add the following dependency to your `pom.xml` if you want to execute Storm code
**Please note**: Do not add `storm-core` as a dependency. It is already included via `flink-storm`.
**Please note**: `flink-storm` is not part of the provided binary Flink distribution.
Thus, you need to include `flink-storm` classes (and their dependencies) in your program jar (also called uber-jar or fat-jar) that is submitted to Flink's JobManager.
See *WordCount Storm* within `flink-storm-examples/pom.xml` for an example of how to package a jar correctly.
If you want to avoid large uber-jars, you can manually copy `storm-core-0.9.4.jar`, `json-simple-1.1.jar`, and `flink-storm-{{site.version}}.jar` into Flink's `lib/` folder on each cluster node (*before* the cluster is started).
For this case, it is sufficient to include only your own Spout and Bolt classes (and their internal dependencies) into the program jar.
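For reference, the `flink-storm` dependency mentioned above typically looks like the following in your `pom.xml` (the `groupId` and the `{{site.version}}` placeholder are inferred from the surrounding text; verify the coordinates against your Flink version):

~~~xml
<!-- flink-storm compatibility layer.
     Do NOT add storm-core yourself; it is pulled in transitively. -->
<dependency>
	<groupId>org.apache.flink</groupId>
	<artifactId>flink-storm</artifactId>
	<version>{{site.version}}</version>
</dependency>
~~~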
# Execute Storm Topologies
Flink provides a Storm compatible API (`org.apache.flink.storm.api`) that offers replacements for the following classes:
@@ -80,13 +83,15 @@ builder.setBolt("sink", new BoltFileSink(outputFilePath)).shuffleGrouping("count
Config conf = new Config();
if(runLocal) { // submit to test cluster
	// replaces: LocalCluster cluster = new LocalCluster();
	FlinkLocalCluster cluster = new FlinkLocalCluster();
cluster.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
} else { // submit to remote cluster
// optional
// conf.put(Config.NIMBUS_HOST, "remoteHost");
// conf.put(Config.NIMBUS_THRIFT_PORT, 6123);
	// replaces: StormSubmitter.submitTopology(topologyId, conf, builder.createTopology());
	FlinkSubmitter.submitTopology("WordCount", conf, FlinkTopology.createTopology(builder));
}
~~~
</div>
# flink-storm
`flink-storm` is a compatibility layer for Apache Storm that allows embedding unmodified Spouts or Bolts within a regular Flink streaming program (via `SpoutWrapper` and `BoltWrapper`).
Additionally, a whole Storm topology can be submitted to Flink (see `FlinkLocalCluster` and `FlinkSubmitter`).
Only a few minor changes to the original submitting code are required.
The code that builds the topology itself can be reused unmodified. See `flink-storm-examples` for a simple word-count example.
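The embedding approach mentioned above can be sketched as follows. This is illustrative only: `MySpout` and `MyBolt` stand in for your own Storm classes, and the wrapper classes are assumed to live in `org.apache.flink.storm.wrappers`; check the API of your Flink version before use.

~~~java
// Sketch: embedding an unmodified Storm Spout and Bolt in a Flink streaming job.
// MySpout and MyBolt are hypothetical placeholders for your Storm code.
import org.apache.flink.api.java.typeutils.TypeExtractor;
import org.apache.flink.storm.wrappers.BoltWrapper;
import org.apache.flink.storm.wrappers.SpoutWrapper;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// SpoutWrapper turns a Storm Spout into a Flink source function
DataStream<String> lines = env.addSource(
		new SpoutWrapper<String>(new MySpout()),
		TypeExtractor.getForClass(String.class));

// BoltWrapper turns a Storm Bolt into a Flink stream operator
lines.transform("my-bolt",
		TypeExtractor.getForClass(String.class),
		new BoltWrapper<String, String>(new MyBolt()))
	.print();

env.execute("embedded-storm-example");
~~~

Note that the Spout and Bolt implementations themselves stay untouched; only the surrounding driver code changes.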
**Please note**: Do not add `storm-core` as a dependency. It is already included via `flink-storm`.
The following Storm features are not (yet fully) supported by the compatibility layer:
* tuple meta information
* fault-tolerance guarantees (i.e., calls to `ack()`/`fail()` and anchoring are ignored)