Commit 0b0b8a1d authored by Igal Levy

Merge branch 'master' into igalDruid

......@@ -30,4 +30,4 @@ echo "For examples, see: "
echo " "
ls -1 examples/*/*sh
echo " "
echo "See also http://druid.io/docs/0.6.73"
echo "See also http://druid.io/docs/0.6.81"
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -54,10 +54,8 @@ The interval is the [ISO8601 interval](http://en.wikipedia.org/wiki/ISO_8601#Tim
"gran": "day"
},
"pathSpec": {
"type": "granularity",
"dataGranularity": "hour",
"inputPath": "s3n:\/\/billy-bucket\/the\/data\/is\/here",
"filePattern": ".*"
"type": "static",
"paths" : "example/path/data.gz,example/path/moredata.gz"
},
"rollupSpec": {
"aggs": [
......@@ -116,6 +114,20 @@ The interval is the [ISO8601 interval](http://en.wikipedia.org/wiki/ISO_8601#Tim
There are multiple types of path specification:
##### `static`
Is a type of data loader where a static path to the data files is passed.
|property|description|required?|
|--------|-----------|---------|
|paths|A String of comma-separated input paths indicating where the raw data is located.|yes|
For example, using the static input paths:
```
"paths" : "s3n://billy-bucket/the/data/is/here/data.gz, s3n://billy-bucket/the/data/is/here/moredata.gz, s3n://billy-bucket/the/data/is/here/evenmoredata.gz"
```
##### `granularity`
Is a type of data loader that expects data to be laid out in a specific path format. Specifically, it expects the data to be segregated by day in this directory format `y=XXXX/m=XX/d=XX/H=XX/M=XX/S=XX` (date fields are lowercase, time fields are uppercase).
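For example, a `granularity` path spec might look like the following. This is a sketch that reuses the sample values from the spec shown earlier on this page; the bucket path is illustrative only:
```
"pathSpec": {
  "type": "granularity",
  "dataGranularity": "hour",
  "inputPath": "s3n://billy-bucket/the/data/is/here",
  "filePattern": ".*"
}
```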
......
......@@ -19,13 +19,13 @@ Clone Druid and build it:
git clone https://github.com/metamx/druid.git druid
cd druid
git fetch --tags
git checkout druid-0.6.73
git checkout druid-0.6.81
./build.sh
```
### Downloading the DSK (Druid Standalone Kit)
[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz) a stand-alone tarball and run it:
[Download](http://static.druid.io/artifacts/releases/druid-services-0.6.81-bin.tar.gz) a stand-alone tarball and run it:
``` bash
tar -xzf druid-services-0.X.X-bin.tar.gz
......
......@@ -66,7 +66,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/indexer
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.81"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
......@@ -115,7 +115,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/worker
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.81","io.druid.extensions:druid-kafka-seven:0.6.81"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
......
......@@ -21,6 +21,10 @@ druid.storage.bucket=druid
druid.storage.baseKey=sample
```
## How do I get HDFS to work?
Make sure to include the `druid-hdfs-storage` module as one of your extensions and set `druid.storage.type=hdfs`.
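For example, a minimal runtime.properties sketch might look like the following; the extension version, the segment directory property, and the HDFS path are assumptions to adapt to your deployment:
```
druid.extensions.coordinates=["io.druid.extensions:druid-hdfs-storage:0.6.81"]
druid.storage.type=hdfs
# assumed HDFS location for segments; point this at your own cluster
druid.storage.storageDirectory=hdfs://namenode:9000/druid/segments
```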
## I don't see my Druid segments on my historical nodes
You can check the coordinator console located at `<COORDINATOR_IP>:<PORT>/cluster.html`. Make sure that your segments have actually loaded on [historical nodes](Historical.html). If your segments are not present, check the coordinator logs for messages about capacity or replication errors. One common reason segments are not downloaded is that historical nodes have a `maxSize` that is too small, making them incapable of downloading more data. You can change that with (for example):
......@@ -31,7 +35,7 @@ You can check the coordinator console located at `<COORDINATOR_IP>:<PORT>/cluste
## My queries are returning empty results
You can check `<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>` for the dimensions and metrics that have been created for your datasource. Make sure that the names of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify matches a valid time range where data exists.
You can check `<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>?interval=0/3000` for the dimensions and metrics that have been created for your datasource. Make sure that the names of the aggregators you use in your query match one of these metrics. Also make sure that the query interval you specify matches a valid time range where data exists. Note: the broker endpoint will only return valid results on historical segments.
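For example, a quick way to inspect this from the command line (a sketch; substitute your broker host, port, and datasource):
``` bash
curl "http://<BROKER_IP>:<PORT>/druid/v2/datasources/<YOUR_DATASOURCE>?interval=0/3000"
```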
## More information
......
......@@ -7,7 +7,7 @@ The plumber handles generated segments both while they are being generated and w
|Field|Type|Description|Required|
|-----|----|-----------|--------|
|type|String|Specifies the type of plumber. Each value will have its own configuration schema. Plumbers packaged with Druid are described below.|yes|
|type|String|Specifies the type of plumber. Each value will have its own configuration schema. Plumbers packaged with Druid are described below. The default type is "realtime".|yes|
The following can be configured on the plumber:
......@@ -16,12 +16,11 @@ The following can be configured on the plumber:
* `maxPendingPersists` is the maximum number of persists a plumber can run concurrently before it starts to block.
* `segmentGranularity` specifies the granularity of the segment, or the amount of time a segment will represent.
* `rejectionPolicy` determines the data acceptance policy used when creating and handing off segments (see the configuration sketch after this list). The following policies are available:
* `serverTime` &ndash; The default policy, it is optimal for current data that is generated and ingested in real time. Uses `windowPeriod` to accept only those events that are inside the window looking forward and back.
* `serverTime` &ndash; The recommended policy for "current time" data, it is optimal for current data that is generated and ingested in real time. Uses `windowPeriod` to accept only those events that are inside the window looking forward and back.
* `messageTime` &ndash; Can be used for non-"current time" data as long as that data is relatively in sequence. Events are rejected if they are less than `windowPeriod` from the event with the latest timestamp. Hand off only occurs if an event is seen after the `segmentGranularity` and `windowPeriod`.
* `none` &ndash; Never hands off data unless shutdown() is called on the configured firehose.
* `test` &ndash; Useful for testing that handoff is working, *not useful in terms of data integrity*. It uses the sum of `segmentGranularity` plus `windowPeriod` as a window.
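For example, a plumber configuration in a realtime spec might look like the following sketch; the field values here are illustrative assumptions rather than recommended settings:
```
"plumber": {
  "type": "realtime",
  "windowPeriod": "PT10m",
  "segmentGranularity": "hour",
  "basePersistDirectory": "/tmp/realtime/basePersist",
  "maxPendingPersists": 0,
  "rejectionPolicy": { "type": "serverTime" }
}
```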
Available Plumbers
------------------
......
......@@ -27,7 +27,7 @@ druid.host=localhost
druid.service=realtime
druid.port=8083
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-kafka-seven:0.6.81"]
druid.zk.service.host=localhost
......@@ -76,7 +76,7 @@ druid.host=#{IP_ADDR}:8080
druid.port=8080
druid.service=druid/prod/realtime
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.81","io.druid.extensions:druid-kafka-seven:0.6.81"]
druid.zk.service.host=#{ZK_IPs}
druid.zk.paths.base=/druid/prod
......
......@@ -118,10 +118,10 @@ The Plumber handles generated segments both while they are being generated and w
* `windowPeriod` is the amount of lag time to allow events. The example configures a 10 minute window, meaning that any event older than 10 minutes will be thrown away and not included in the segment generated by the realtime server.
* `segmentGranularity` specifies the granularity of the segment, or the amount of time a segment will represent.
* `basePersistDirectory` is the directory to put things that need persistence. The plumber is responsible for the actual intermediate persists and this tells it where to store those persists.
* `rejectionPolicy` determines what events are rejected upon ingestion.
See [Plumber](Plumber.html) for a fuller discussion of Plumber configuration.
Constraints
-----------
......
......@@ -49,7 +49,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
### Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz). Download this file to a directory of your choosing.
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.81-bin.tar.gz). Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
......@@ -60,7 +60,7 @@ tar -zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
cd druid-services-0.6.73
cd druid-services-0.6.81
```
You should see a bunch of files:
......
......@@ -13,7 +13,7 @@ In this tutorial, we will set up other types of Druid nodes and external depende
If you followed the first tutorial, you should already have Druid downloaded. If not, let's go back and do that first.
You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz)
You can download the latest version of druid [here](http://static.druid.io/artifacts/releases/druid-services-0.6.81-bin.tar.gz)
and untar the contents within by issuing:
......@@ -149,7 +149,7 @@ druid.port=8081
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.81"]
# Dummy read only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b
......@@ -240,7 +240,7 @@ druid.port=8083
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.81","io.druid.extensions:druid-kafka-seven:0.6.81"]
# Change this config to db to hand off to the rest of the Druid cluster
druid.publish.type=noop
......
......@@ -37,7 +37,7 @@ There are two ways to setup Druid: download a tarball, or [Build From Source](Bu
h3. Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz)
We've built a tarball that contains everything you'll need. You'll find it [here](http://static.druid.io/artifacts/releases/druid-services-0.6.81-bin.tar.gz)
Download this file to a directory of your choosing.
You can extract the awesomeness within by issuing:
......@@ -48,7 +48,7 @@ tar zxvf druid-services-*-bin.tar.gz
Not too lost so far right? That's great! If you cd into the directory:
```
cd druid-services-0.6.73
cd druid-services-0.6.81
```
You should see a bunch of files:
......
......@@ -9,7 +9,7 @@ There are two ways to setup Druid: download a tarball, or build it from source.
h3. Download a Tarball
We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.73-bin.tar.gz.
We've built a tarball that contains everything you'll need. You'll find it "here":http://static.druid.io/artifacts/releases/druid-services-0.6.81-bin.tar.gz.
Download this bad boy to a directory of your choosing.
You can extract the awesomeness within by issuing:
......
......@@ -4,7 +4,7 @@ druid.port=8081
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-s3-extensions:0.6.81"]
# Dummy read only AWS account (used to download example data)
druid.s3.secretKey=QyyfVZ7llSiRg6Qcrql1eEUG7buFpAK6T6engr1b
......
......@@ -4,7 +4,7 @@ druid.port=8083
druid.zk.service.host=localhost
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.73","io.druid.extensions:druid-kafka-seven:0.6.73","io.druid.extensions:druid-rabbitmq:0.6.73"]
druid.extensions.coordinates=["io.druid.extensions:druid-examples:0.6.81","io.druid.extensions:druid-kafka-seven:0.6.81","io.druid.extensions:druid-rabbitmq:0.6.81"]
# Change this config to db to hand off to the rest of the Druid cluster
druid.publish.type=noop
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -23,14 +23,14 @@
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<packaging>pom</packaging>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
<name>druid</name>
<description>druid</description>
<scm>
<connection>scm:git:ssh://git@github.com/metamx/druid.git</connection>
<developerConnection>scm:git:ssh://git@github.com/metamx/druid.git</developerConnection>
<url>http://www.github.com/metamx/druid</url>
<tag>druid-0.6.73-SNAPSHOT</tag>
<tag>druid-0.6.81-SNAPSHOT</tag>
</scm>
<prerequisites>
......@@ -313,17 +313,17 @@
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-server</artifactId>
<version>8.1.11.v20130520</version>
<version>9.1.3.v20140225</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-servlet</artifactId>
<version>8.1.11.v20130520</version>
<version>9.1.3.v20140225</version>
</dependency>
<dependency>
<groupId>org.eclipse.jetty</groupId>
<artifactId>jetty-servlets</artifactId>
<version>8.1.11.v20130520</version>
<version>9.1.3.v20140225</version>
</dependency>
<dependency>
<groupId>joda-time</groupId>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -25,6 +25,8 @@ import com.metamx.common.guava.Yielder;
import com.metamx.common.guava.YieldingAccumulator;
import com.metamx.common.guava.YieldingSequenceBase;
import java.io.Closeable;
/**
*/
public class ReferenceCountingSequence<T> extends YieldingSequenceBase<T>
......@@ -43,6 +45,7 @@ public class ReferenceCountingSequence<T> extends YieldingSequenceBase<T>
OutType initValue, YieldingAccumulator<OutType, T> accumulator
)
{
return new ResourceClosingYielder<OutType>(baseSequence.toYielder(initValue, accumulator), segment.increment());
final Closeable closeable = segment.increment();
return new ResourceClosingYielder<OutType>(baseSequence.toYielder(initValue, accumulator), closeable);
}
}
\ No newline at end of file
......@@ -37,11 +37,11 @@ public class HyperLogLogCollectorTest
{
private final HashFunction fn = Hashing.murmur3_128();
private final Random random = new Random();
@Test
public void testFolding() throws Exception
{
final Random random = new Random(0);
final int[] numValsToCheck = {10, 20, 50, 100, 1000, 2000};
for (int numThings : numValsToCheck) {
HyperLogLogCollector allCombined = HyperLogLogCollector.makeLatestCollector();
......@@ -154,6 +154,7 @@ public class HyperLogLogCollectorTest
@Test
public void testFoldingByteBuffers() throws Exception
{
final Random random = new Random(0);
final int[] numValsToCheck = {10, 20, 50, 100, 1000, 2000};
for (int numThings : numValsToCheck) {
HyperLogLogCollector allCombined = HyperLogLogCollector.makeLatestCollector();
......@@ -186,6 +187,7 @@ public class HyperLogLogCollectorTest
@Test
public void testFoldingReadOnlyByteBuffers() throws Exception
{
final Random random = new Random(0);
final int[] numValsToCheck = {10, 20, 50, 100, 1000, 2000};
for (int numThings : numValsToCheck) {
HyperLogLogCollector allCombined = HyperLogLogCollector.makeLatestCollector();
......@@ -221,6 +223,7 @@ public class HyperLogLogCollectorTest
@Test
public void testFoldingReadOnlyByteBuffersWithArbitraryPosition() throws Exception
{
final Random random = new Random(0);
final int[] numValsToCheck = {10, 20, 50, 100, 1000, 2000};
for (int numThings : numValsToCheck) {
HyperLogLogCollector allCombined = HyperLogLogCollector.makeLatestCollector();
......@@ -482,9 +485,11 @@ public class HyperLogLogCollectorTest
return retVal;
}
//@Test // This test can help when finding potential combinations that are weird, but it's non-deterministic
@Ignore @Test // This test can help when finding potential combinations that are weird, but it's non-deterministic
public void testFoldingwithDifferentOffsets() throws Exception
{
// final Random random = new Random(37); // this seed will cause this test to fail because of slightly larger errors
final Random random = new Random(0);
for (int j = 0; j < 10; j++) {
HyperLogLogCollector smallVals = HyperLogLogCollector.makeLatestCollector();
HyperLogLogCollector bigVals = HyperLogLogCollector.makeLatestCollector();
......@@ -511,9 +516,10 @@ public class HyperLogLogCollectorTest
}
}
//@Test
@Ignore @Test
public void testFoldingwithDifferentOffsets2() throws Exception
{
final Random random = new Random(0);
MessageDigest md = MessageDigest.getInstance("SHA-1");
for (int j = 0; j < 1; j++) {
......@@ -632,6 +638,7 @@ public class HyperLogLogCollectorTest
@Test
public void testSparseEstimation() throws Exception
{
final Random random = new Random(0);
HyperLogLogCollector collector = HyperLogLogCollector.makeLatestCollector();
for (int i = 0; i < 100; ++i) {
......@@ -742,6 +749,54 @@ public class HyperLogLogCollectorTest
}
}
@Test
public void testMaxOverflow() {
HyperLogLogCollector collector = HyperLogLogCollector.makeLatestCollector();
collector.add((short)23, (byte)16);
Assert.assertEquals(23, collector.getMaxOverflowRegister());
Assert.assertEquals(16, collector.getMaxOverflowValue());
Assert.assertEquals(0, collector.getRegisterOffset());
Assert.assertEquals(0, collector.getNumNonZeroRegisters());
collector.add((short)56, (byte)17);
Assert.assertEquals(56, collector.getMaxOverflowRegister());
Assert.assertEquals(17, collector.getMaxOverflowValue());
collector.add((short)43, (byte)16);
Assert.assertEquals(56, collector.getMaxOverflowRegister());
Assert.assertEquals(17, collector.getMaxOverflowValue());
Assert.assertEquals(0, collector.getRegisterOffset());
Assert.assertEquals(0, collector.getNumNonZeroRegisters());
}
@Test
public void testMergeMaxOverflow() {
// no offset
HyperLogLogCollector collector = HyperLogLogCollector.makeLatestCollector();
collector.add((short)23, (byte)16);
HyperLogLogCollector other = HyperLogLogCollector.makeLatestCollector();
collector.add((short)56, (byte)17);
collector.fold(other);
Assert.assertEquals(56, collector.getMaxOverflowRegister());
Assert.assertEquals(17, collector.getMaxOverflowValue());
// different offsets
// fill up all the buckets so we reach a registerOffset of 49
collector = HyperLogLogCollector.makeLatestCollector();
fillBuckets(collector, (byte) 0, (byte) 49);
collector.add((short)23, (byte)65);
other = HyperLogLogCollector.makeLatestCollector();
fillBuckets(other, (byte) 0, (byte) 43);
other.add((short)47, (byte)67);
collector.fold(other);
Assert.assertEquals(47, collector.getMaxOverflowRegister());
Assert.assertEquals(67, collector.getMaxOverflowValue());
}
private static void fillBuckets(HyperLogLogCollector collector, byte startOffset, byte endOffset)
{
......@@ -756,7 +811,7 @@ public class HyperLogLogCollectorTest
}
// Provides a nice printout of error rates as a function of cardinality
//@Test
@Ignore @Test
public void showErrorRate() throws Exception
{
HashFunction fn = Hashing.murmur3_128();
......
all : druid
all : druid sigmod zip
druid : druid.pdf
sigmod : sgmd0658-yang.pdf
sigmod : modii658-yang.pdf
zip : sgmd0658-yang.zip
zip : modii658-yang.zip
%.zip : %.pdf
@rm -f dummy.ps
......
This diff has been collapsed.
......@@ -9,7 +9,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -28,7 +28,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -20,6 +20,8 @@
package io.druid.client;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.collect.Lists;
import com.metamx.common.guava.Accumulator;
import com.metamx.common.guava.Sequence;
import com.metamx.common.guava.Sequences;
import io.druid.client.cache.Cache;
......@@ -72,22 +74,25 @@ public class CachePopulatingQueryRunner<T> implements QueryRunner<T>
&& strategy != null
&& cacheConfig.isPopulateCache()
// historical only populates distributed cache since the cache lookups are done at broker.
&& !(cache instanceof MapCache) ;
Sequence<T> results = base.run(query);
&& !(cache instanceof MapCache);
if (populateCache) {
Sequence<T> results = base.run(query);
Cache.NamedKey key = CacheUtil.computeSegmentCacheKey(
segmentIdentifier,
segmentDescriptor,
strategy.computeCacheKey(query)
);
ArrayList<T> resultAsList = Sequences.toList(results, new ArrayList<T>());
CacheUtil.populate(
cache,
mapper,
key,
Sequences.toList(Sequences.map(results, strategy.prepareForCache()), new ArrayList())
Lists.transform(resultAsList, strategy.prepareForCache())
);
return Sequences.simple(resultAsList);
} else {
return base.run(query);
}
return results;
}
}
......@@ -17,66 +17,97 @@
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.server.router;
package io.druid.client;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.inject.Inject;
import com.fasterxml.jackson.dataformat.smile.SmileFactory;
import com.google.common.base.Throwables;
import com.google.common.util.concurrent.FutureCallback;
import com.google.common.util.concurrent.Futures;
import com.google.common.util.concurrent.ListenableFuture;
import com.metamx.common.logger.Logger;
import com.metamx.http.client.HttpClient;
import io.druid.curator.discovery.ServerDiscoveryFactory;
import com.metamx.http.client.response.HttpResponseHandler;
import io.druid.guice.annotations.Global;
import io.druid.query.Query;
import io.druid.query.QueryRunner;
import io.druid.query.QuerySegmentWalker;
import io.druid.query.QueryToolChestWarehouse;
import io.druid.query.SegmentDescriptor;
import org.joda.time.Interval;
import org.jboss.netty.handler.codec.http.HttpHeaders;
import javax.inject.Inject;
import java.io.IOException;
import java.net.URL;
import java.util.concurrent.atomic.AtomicInteger;
/**
*/
public class RouterQuerySegmentWalker implements QuerySegmentWalker
public class RoutingDruidClient<IntermediateType, FinalType>
{
private final QueryToolChestWarehouse warehouse;
private static final Logger log = new Logger(RoutingDruidClient.class);
private final ObjectMapper objectMapper;
private final HttpClient httpClient;
private final BrokerSelector brokerSelector;
private final TierConfig tierConfig;
private final AtomicInteger openConnections;
private final boolean isSmile;
@Inject
public RouterQuerySegmentWalker(
QueryToolChestWarehouse warehouse,
public RoutingDruidClient(
ObjectMapper objectMapper,
@Global HttpClient httpClient,
BrokerSelector brokerSelector,
TierConfig tierConfig
@Global HttpClient httpClient
)
{
this.warehouse = warehouse;
this.objectMapper = objectMapper;
this.httpClient = httpClient;
this.brokerSelector = brokerSelector;
this.tierConfig = tierConfig;
}
@Override
public <T> QueryRunner<T> getQueryRunnerForIntervals(Query<T> query, Iterable<Interval> intervals)
{
return makeRunner();
this.isSmile = this.objectMapper.getFactory() instanceof SmileFactory;
this.openConnections = new AtomicInteger();
}
@Override
public <T> QueryRunner<T> getQueryRunnerForSegments(Query<T> query, Iterable<SegmentDescriptor> specs)
public int getNumOpenConnections()
{
return makeRunner();
return openConnections.get();
}
private <T> QueryRunner<T> makeRunner()
public ListenableFuture<FinalType> run(
String host,
Query query,
HttpResponseHandler<IntermediateType, FinalType> responseHandler
)
{
return new TierAwareQueryRunner<T>(
warehouse,
objectMapper,
httpClient,
brokerSelector,
tierConfig
);
final ListenableFuture<FinalType> future;
final String url = String.format("http://%s/druid/v2/", host);
try {
log.debug("Querying url[%s]", url);
future = httpClient
.post(new URL(url))
.setContent(objectMapper.writeValueAsBytes(query))
.setHeader(HttpHeaders.Names.CONTENT_TYPE, isSmile ? "application/smile" : "application/json")
.go(responseHandler);
openConnections.getAndIncrement();
Futures.addCallback(
future,
new FutureCallback<FinalType>()
{
@Override
public void onSuccess(FinalType result)
{
openConnections.getAndDecrement();
}
@Override
public void onFailure(Throwable t)
{
openConnections.getAndDecrement();
}
}
);
}
catch (IOException e) {
throw Throwables.propagate(e);
}
return future;
}
}
......@@ -39,6 +39,8 @@ public interface Cache
public CacheStats getStats();
public boolean isLocal();
public class NamedKey
{
final public String namespace;
......
......@@ -149,4 +149,8 @@ public class MapCache implements Cache
retVal.rewind();
return retVal;
}
public boolean isLocal() {
return true;
}
}
......@@ -278,4 +278,8 @@ public class MemcachedCache implements Cache
// hash keys to keep things under 250 characters for memcached
return memcachedPrefix + ":" + DigestUtils.sha1Hex(key.namespace) + ":" + DigestUtils.sha1Hex(key.key);
}
public boolean isLocal() {
return false;
}
}
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.client.selector;
import com.metamx.common.Pair;
import io.druid.curator.discovery.ServerDiscoverySelector;
import io.druid.query.Query;
/**
*/
public interface HostSelector<T>
{
public String getDefaultServiceName();
public Pair<String, ServerDiscoverySelector> select(Query<T> query);
}
......@@ -31,17 +31,15 @@ import com.metamx.common.logger.Logger;
import com.metamx.emitter.service.ServiceEmitter;
import com.metamx.emitter.service.ServiceMetricEvent;
import io.druid.collections.StupidPool;
import io.druid.concurrent.Execs;
import io.druid.guice.annotations.Global;
import io.druid.guice.annotations.Processing;
import io.druid.query.MetricsEmittingExecutorService;
import io.druid.query.PrioritizedExecutorService;
import io.druid.server.DruidProcessingConfig;
import io.druid.server.VMUtils;
import java.lang.reflect.InvocationTargetException;
import java.nio.ByteBuffer;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicLong;
/**
......@@ -82,37 +80,26 @@ public class DruidProcessingModule implements Module
public StupidPool<ByteBuffer> getIntermediateResultsPool(DruidProcessingConfig config)
{
try {
Class<?> vmClass = Class.forName("sun.misc.VM");
Object maxDirectMemoryObj = vmClass.getMethod("maxDirectMemory").invoke(null);
long maxDirectMemory = VMUtils.getMaxDirectMemory();
if (maxDirectMemoryObj == null || !(maxDirectMemoryObj instanceof Number)) {
log.info("Cannot determine maxDirectMemory from[%s]", maxDirectMemoryObj);
} else {
long maxDirectMemory = ((Number) maxDirectMemoryObj).longValue();
final long memoryNeeded = (long) config.intermediateComputeSizeBytes() * (config.getNumThreads() + 1);
if (maxDirectMemory < memoryNeeded) {
throw new ProvisionException(
String.format(
"Not enough direct memory. Please adjust -XX:MaxDirectMemorySize or druid.computation.buffer.size: "
+ "maxDirectMemory[%,d], memoryNeeded[%,d], druid.computation.buffer.size[%,d], numThreads[%,d]",
maxDirectMemory, memoryNeeded, config.intermediateComputeSizeBytes(), config.getNumThreads()
)
);
}
final long memoryNeeded = (long) config.intermediateComputeSizeBytes() * (config.getNumThreads() + 1);
if (maxDirectMemory < memoryNeeded) {
throw new ProvisionException(
String.format(
"Not enough direct memory. Please adjust -XX:MaxDirectMemorySize, druid.computation.buffer.size, or druid.processing.numThreads: "
+ "maxDirectMemory[%,d], memoryNeeded[%,d] = druid.computation.buffer.size[%,d] * ( druid.processing.numThreads[%,d] + 1 )",
maxDirectMemory,
memoryNeeded,
config.intermediateComputeSizeBytes(),
config.getNumThreads()
)
);
}
} catch(UnsupportedOperationException e) {
log.info(e.getMessage());
}
catch (ClassNotFoundException e) {
log.info("No VM class, cannot do memory check.");
}
catch (NoSuchMethodException e) {
log.info("VM.maxDirectMemory doesn't exist, cannot do memory check.");
}
catch (InvocationTargetException e) {
log.warn(e, "static method shouldn't throw this");
}
catch (IllegalAccessException e) {
log.warn(e, "public method, shouldn't throw this");
catch(RuntimeException e) {
log.warn(e, e.getMessage());
}
return new IntermediateProcessingBufferPool(config.intermediateComputeSizeBytes());
......
/*
* Druid - a distributed column store.
* Copyright (C) 2012 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.server;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.api.client.repackaged.com.google.common.base.Throwables;
import com.metamx.emitter.EmittingLogger;
import com.metamx.emitter.service.ServiceEmitter;
import com.metamx.emitter.service.ServiceMetricEvent;
import com.metamx.http.client.response.ClientResponse;
import com.metamx.http.client.response.HttpResponseHandler;
import io.druid.client.RoutingDruidClient;
import io.druid.guice.annotations.Json;
import io.druid.guice.annotations.Smile;
import io.druid.query.Query;
import io.druid.server.log.RequestLogger;
import io.druid.server.router.QueryHostFinder;
import org.jboss.netty.buffer.ChannelBuffer;
import org.jboss.netty.handler.codec.http.HttpChunk;
import org.jboss.netty.handler.codec.http.HttpResponse;
import org.joda.time.DateTime;
import javax.servlet.AsyncContext;
import javax.servlet.ServletException;
import javax.servlet.annotation.WebServlet;
import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import java.io.IOException;
import java.io.OutputStream;
import java.nio.charset.Charset;
/**
*/
@WebServlet(asyncSupported = true)
public class AsyncQueryForwardingServlet extends HttpServlet
{
private static final EmittingLogger log = new EmittingLogger(AsyncQueryForwardingServlet.class);
private static final Charset UTF8 = Charset.forName("UTF-8");
private static final String DISPATCHED = "dispatched";
private final ObjectMapper jsonMapper;
private final ObjectMapper smileMapper;
private final QueryHostFinder hostFinder;
private final RoutingDruidClient routingDruidClient;
private final ServiceEmitter emitter;
private final RequestLogger requestLogger;
private final QueryIDProvider idProvider;
public AsyncQueryForwardingServlet(
@Json ObjectMapper jsonMapper,
@Smile ObjectMapper smileMapper,
QueryHostFinder hostFinder,
RoutingDruidClient routingDruidClient,
ServiceEmitter emitter,
RequestLogger requestLogger,
QueryIDProvider idProvider
)
{
this.jsonMapper = jsonMapper;
this.smileMapper = smileMapper;
this.hostFinder = hostFinder;
this.routingDruidClient = routingDruidClient;
this.emitter = emitter;
this.requestLogger = requestLogger;
this.idProvider = idProvider;
}
@Override
protected void doPost(
final HttpServletRequest req, final HttpServletResponse resp
) throws ServletException, IOException
{
final long start = System.currentTimeMillis();
Query query = null;
String queryId;
final boolean isSmile = "application/smile".equals(req.getContentType());
ObjectMapper objectMapper = isSmile ? smileMapper : jsonMapper;
OutputStream out = null;
try {
final AsyncContext ctx = req.startAsync(req, resp);
if (req.getAttribute(DISPATCHED) != null) {
return;
}
req.setAttribute(DISPATCHED, true);
resp.setStatus(200);
resp.setContentType("application/x-javascript");
query = objectMapper.readValue(req.getInputStream(), Query.class);
queryId = query.getId();
if (queryId == null) {
queryId = idProvider.next(query);
query = query.withId(queryId);
}
requestLogger.log(
new RequestLogLine(new DateTime(), req.getRemoteAddr(), query)
);
out = resp.getOutputStream();
final OutputStream outputStream = out;
final String host = hostFinder.getHost(query);
final Query theQuery = query;
final String theQueryId = queryId;
final HttpResponseHandler<OutputStream, OutputStream> responseHandler = new HttpResponseHandler<OutputStream, OutputStream>()
{
@Override
public ClientResponse<OutputStream> handleResponse(HttpResponse response)
{
byte[] bytes = getContentBytes(response.getContent());
if (bytes.length > 0) {
try {
outputStream.write(bytes);
}
catch (Exception e) {
throw Throwables.propagate(e);
}
}
return ClientResponse.finished(outputStream);
}
@Override
public ClientResponse<OutputStream> handleChunk(
ClientResponse<OutputStream> clientResponse, HttpChunk chunk
)
{
byte[] bytes = getContentBytes(chunk.getContent());
if (bytes.length > 0) {
try {
clientResponse.getObj().write(bytes);
}
catch (Exception e) {
throw Throwables.propagate(e);
}
}
return clientResponse;
}
@Override
public ClientResponse<OutputStream> done(ClientResponse<OutputStream> clientResponse)
{
final long requestTime = System.currentTimeMillis() - start;
log.info("Request time: %d", requestTime);
emitter.emit(
new ServiceMetricEvent.Builder()
.setUser2(theQuery.getDataSource().getName())
.setUser4(theQuery.getType())
.setUser5(theQuery.getIntervals().get(0).toString())
.setUser6(String.valueOf(theQuery.hasFilters()))
.setUser7(req.getRemoteAddr())
.setUser8(theQueryId)
.setUser9(theQuery.getDuration().toPeriod().toStandardMinutes().toString())
.build("request/time", requestTime)
);
final OutputStream obj = clientResponse.getObj();
try {
resp.flushBuffer();
outputStream.close();
}
catch (Exception e) {
throw Throwables.propagate(e);
}
finally {
ctx.dispatch();
}
return ClientResponse.finished(obj);
}
private byte[] getContentBytes(ChannelBuffer content)
{
byte[] contentBytes = new byte[content.readableBytes()];
content.readBytes(contentBytes);
return contentBytes;
}
};
ctx.start(
new Runnable()
{
@Override
public void run()
{
routingDruidClient.run(host, theQuery, responseHandler);
}
}
);
}
catch (Exception e) {
if (!resp.isCommitted()) {
resp.setStatus(500);
resp.resetBuffer();
if (out == null) {
out = resp.getOutputStream();
}
out.write((e.getMessage() == null) ? "Exception null".getBytes(UTF8) : e.getMessage().getBytes(UTF8));
out.write("\n".getBytes(UTF8));
}
resp.flushBuffer();
log.makeAlert(e, "Exception handling request")
.addData("query", query)
.addData("peer", req.getRemoteAddr())
.emit();
}
}
}
......@@ -31,4 +31,12 @@ public abstract class DruidProcessingConfig extends ExecutorServiceConfig
{
return 1024 * 1024 * 1024;
}
@Override @Config(value = "${base_path}.numThreads")
public int getNumThreads()
{
// default to leaving one core for background tasks
final int processors = Runtime.getRuntime().availableProcessors();
return processors > 1 ? processors - 1 : processors;
}
}
......@@ -136,8 +136,8 @@ public class QueryResource
.setUser5(query.getIntervals().get(0).toString())
.setUser6(String.valueOf(query.hasFilters()))
.setUser7(req.getRemoteAddr())
.setUser8(queryId)
.setUser9(query.getDuration().toPeriod().toStandardMinutes().toString())
.setUser10(queryId)
.build("request/time", requestTime)
);
}
......
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013, 2014 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.server;
import java.lang.reflect.InvocationTargetException;
public class VMUtils
{
public static long getMaxDirectMemory() throws UnsupportedOperationException
{
try {
Class<?> vmClass = Class.forName("sun.misc.VM");
Object maxDirectMemoryObj = vmClass.getMethod("maxDirectMemory").invoke(null);
if (maxDirectMemoryObj == null || !(maxDirectMemoryObj instanceof Number)) {
throw new UnsupportedOperationException(String.format("Cannot determine maxDirectMemory from [%s]", maxDirectMemoryObj));
} else {
return ((Number) maxDirectMemoryObj).longValue();
}
}
catch (ClassNotFoundException e) {
throw new UnsupportedOperationException("No VM class, cannot do memory check.", e);
}
catch (NoSuchMethodException e) {
throw new UnsupportedOperationException("VM.maxDirectMemory doesn't exist, cannot do memory check.", e);
}
catch (InvocationTargetException e) {
throw new RuntimeException("static method shouldn't throw this", e);
}
catch (IllegalAccessException e) {
throw new RuntimeException("public method, shouldn't throw this", e);
}
}
}
......@@ -26,12 +26,16 @@ import com.google.common.collect.Lists;
import com.google.common.collect.Maps;
import com.google.common.collect.Sets;
import com.google.inject.Inject;
import com.metamx.common.MapUtils;
import com.metamx.common.Pair;
import com.metamx.common.guava.Comparators;
import io.druid.client.DruidDataSource;
import io.druid.client.DruidServer;
import io.druid.client.InventoryView;
import io.druid.client.indexing.IndexingServiceClient;
import io.druid.db.DatabaseSegmentManager;
import io.druid.timeline.DataSegment;
import org.joda.time.DateTime;
import org.joda.time.Interval;
import javax.annotation.Nullable;
......@@ -86,7 +90,7 @@ public class DatasourcesResource
@QueryParam("simple") String simple
)
{
Response.ResponseBuilder builder = Response.status(Response.Status.OK);
Response.ResponseBuilder builder = Response.ok();
if (full != null) {
return builder.entity(getDataSources()).build();
} else if (simple != null) {
......@@ -128,15 +132,71 @@ public class DatasourcesResource
@Path("/{dataSourceName}")
@Produces("application/json")
public Response getTheDataSource(
@PathParam("dataSourceName") final String dataSourceName
@PathParam("dataSourceName") final String dataSourceName,
@QueryParam("full") final String full
)
{
DruidDataSource dataSource = getDataSource(dataSourceName.toLowerCase());
if (dataSource == null) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
if (full != null) {
return Response.ok(dataSource).build();
}
Map<String, Object> tiers = Maps.newHashMap();
Map<String, Object> segments = Maps.newHashMap();
Map<String, Map<String, Object>> retVal = ImmutableMap.of(
"tiers", tiers,
"segments", segments
);
int totalSegmentCount = 0;
long totalSegmentSize = 0;
long minTime = Long.MAX_VALUE;
long maxTime = Long.MIN_VALUE;
for (DruidServer druidServer : serverInventoryView.getInventory()) {
DruidDataSource druidDataSource = druidServer.getDataSource(dataSourceName);
if (druidDataSource == null) {
continue;
}
long dataSourceSegmentSize = 0;
for (DataSegment dataSegment : druidDataSource.getSegments()) {
dataSourceSegmentSize += dataSegment.getSize();
if (dataSegment.getInterval().getStartMillis() < minTime) {
minTime = dataSegment.getInterval().getStartMillis();
}
if (dataSegment.getInterval().getEndMillis() > maxTime) {
maxTime = dataSegment.getInterval().getEndMillis();
}
}
// segment stats
totalSegmentCount += druidDataSource.getSegments().size();
totalSegmentSize += dataSourceSegmentSize;
// tier stats
Map<String, Object> tierStats = (Map) tiers.get(druidServer.getTier());
if (tierStats == null) {
tierStats = Maps.newHashMap();
tiers.put(druidServer.getTier(), tierStats);
}
int segmentCount = MapUtils.getInt(tierStats, "segmentCount", 0);
tierStats.put("segmentCount", segmentCount + druidDataSource.getSegments().size());
long segmentSize = MapUtils.getLong(tierStats, "size", 0L);
tierStats.put("size", segmentSize + dataSourceSegmentSize);
}
return Response.ok(dataSource).build();
segments.put("count", totalSegmentCount);
segments.put("size", totalSegmentSize);
segments.put("minTime", new DateTime(minTime));
segments.put("maxTime", new DateTime(maxTime));
return Response.ok(retVal).build();
}
@POST
......@@ -147,10 +207,10 @@ public class DatasourcesResource
)
{
if (!databaseSegmentManager.enableDatasource(dataSourceName)) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
return Response.status(Response.Status.OK).build();
return Response.ok().build();
}
@DELETE
......@@ -163,29 +223,155 @@ public class DatasourcesResource
)
{
if (indexingServiceClient == null) {
return Response.ok().entity(ImmutableMap.of("error", "no indexing service found")).build();
return Response.ok(ImmutableMap.of("error", "no indexing service found")).build();
}
if (kill != null && Boolean.valueOf(kill)) {
try {
indexingServiceClient.killSegments(dataSourceName, new Interval(interval));
}
catch (Exception e) {
return Response.status(Response.Status.NOT_FOUND)
.entity(
ImmutableMap.of(
"error",
"Exception occurred. Are you sure you have an indexing service?"
)
)
return Response.serverError().entity(
ImmutableMap.of(
"error",
"Exception occurred. Are you sure you have an indexing service?"
)
)
.build();
}
} else {
if (!databaseSegmentManager.removeDatasource(dataSourceName)) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
}
return Response.status(Response.Status.OK).build();
return Response.ok().build();
}
@GET
@Path("/{dataSourceName}/intervals")
@Produces("application/json")
public Response getSegmentDataSourceIntervals(
@PathParam("dataSourceName") String dataSourceName,
@QueryParam("simple") String simple,
@QueryParam("full") String full
)
{
final DruidDataSource dataSource = getDataSource(dataSourceName.toLowerCase());
if (dataSource == null) {
return Response.noContent().build();
}
final Comparator<Interval> comparator = Comparators.inverse(Comparators.intervalsByStartThenEnd());
if (full != null) {
final Map<Interval, Map<String, Object>> retVal = Maps.newTreeMap(comparator);
for (DataSegment dataSegment : dataSource.getSegments()) {
Map<String, Object> segments = retVal.get(dataSegment.getInterval());
if (segments == null) {
segments = Maps.newHashMap();
retVal.put(dataSegment.getInterval(), segments);
}
Pair<DataSegment, Set<String>> val = getSegment(dataSegment.getIdentifier());
segments.put(dataSegment.getIdentifier(), ImmutableMap.of("metadata", val.lhs, "servers", val.rhs));
}
return Response.ok(retVal).build();
}
if (simple != null) {
final Map<Interval, Map<String, Object>> retVal = Maps.newHashMap();
for (DataSegment dataSegment : dataSource.getSegments()) {
Map<String, Object> properties = retVal.get(dataSegment.getInterval());
if (properties == null) {
properties = Maps.newHashMap();
properties.put("size", dataSegment.getSize());
properties.put("count", 1);
retVal.put(dataSegment.getInterval(), properties);
} else {
properties.put("size", MapUtils.getLong(properties, "size", 0L) + dataSegment.getSize());
properties.put("count", MapUtils.getInt(properties, "count", 0) + 1);
}
}
return Response.ok(retVal).build();
}
final Set<Interval> intervals = Sets.newTreeSet(comparator);
for (DataSegment dataSegment : dataSource.getSegments()) {
intervals.add(dataSegment.getInterval());
}
return Response.ok(intervals).build();
}
@GET
@Path("/{dataSourceName}/intervals/{interval}")
@Produces("application/json")
public Response getSegmentDataSourceSpecificInterval(
@PathParam("dataSourceName") String dataSourceName,
@PathParam("interval") String interval,
@QueryParam("simple") String simple,
@QueryParam("full") String full
)
{
final DruidDataSource dataSource = getDataSource(dataSourceName.toLowerCase());
final Interval theInterval = new Interval(interval.replace("_", "/"));
if (dataSource == null || interval == null) {
return Response.noContent().build();
}
final Comparator<Interval> comparator = Comparators.inverse(Comparators.intervalsByStartThenEnd());
if (full != null) {
final Map<Interval, Map<String, Object>> retVal = Maps.newTreeMap(comparator);
for (DataSegment dataSegment : dataSource.getSegments()) {
if (theInterval.contains(dataSegment.getInterval())) {
Map<String, Object> segments = retVal.get(dataSegment.getInterval());
if (segments == null) {
segments = Maps.newHashMap();
retVal.put(dataSegment.getInterval(), segments);
}
Pair<DataSegment, Set<String>> val = getSegment(dataSegment.getIdentifier());
segments.put(dataSegment.getIdentifier(), ImmutableMap.of("metadata", val.lhs, "servers", val.rhs));
}
}
return Response.ok(retVal).build();
}
if (simple != null) {
final Map<Interval, Map<String, Object>> retVal = Maps.newHashMap();
for (DataSegment dataSegment : dataSource.getSegments()) {
if (theInterval.contains(dataSegment.getInterval())) {
Map<String, Object> properties = retVal.get(dataSegment.getInterval());
if (properties == null) {
properties = Maps.newHashMap();
properties.put("size", dataSegment.getSize());
properties.put("count", 1);
retVal.put(dataSegment.getInterval(), properties);
} else {
properties.put("size", MapUtils.getLong(properties, "size", 0L) + dataSegment.getSize());
properties.put("count", MapUtils.getInt(properties, "count", 0) + 1);
}
}
}
return Response.ok(retVal).build();
}
final Set<String> retVal = Sets.newTreeSet(Comparators.inverse(String.CASE_INSENSITIVE_ORDER));
for (DataSegment dataSegment : dataSource.getSegments()) {
if (theInterval.contains(dataSegment.getInterval())) {
retVal.add(dataSegment.getIdentifier());
}
}
return Response.ok(retVal).build();
}
@GET
......@@ -198,10 +384,10 @@ public class DatasourcesResource
{
DruidDataSource dataSource = getDataSource(dataSourceName.toLowerCase());
if (dataSource == null) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
Response.ResponseBuilder builder = Response.status(Response.Status.OK);
Response.ResponseBuilder builder = Response.ok();
if (full != null) {
return builder.entity(dataSource.getSegments()).build();
}
......@@ -212,7 +398,7 @@ public class DatasourcesResource
new Function<DataSegment, Object>()
{
@Override
public Object apply(@Nullable DataSegment segment)
public Object apply(DataSegment segment)
{
return segment.getIdentifier();
}
......@@ -231,15 +417,18 @@ public class DatasourcesResource
{
DruidDataSource dataSource = getDataSource(dataSourceName.toLowerCase());
if (dataSource == null) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
for (DataSegment segment : dataSource.getSegments()) {
if (segment.getIdentifier().equalsIgnoreCase(segmentId)) {
return Response.status(Response.Status.OK).entity(segment).build();
}
Pair<DataSegment, Set<String>> retVal = getSegment(segmentId);
if (retVal != null) {
return Response.ok(
ImmutableMap.of("metadata", retVal.lhs, "servers", retVal.rhs)
).build();
}
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
@DELETE
......@@ -250,10 +439,10 @@ public class DatasourcesResource
)
{
if (!databaseSegmentManager.removeSegment(dataSourceName, segmentId)) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
return Response.status(Response.Status.OK).build();
return Response.ok().build();
}
@POST
......@@ -265,10 +454,27 @@ public class DatasourcesResource
)
{
if (!databaseSegmentManager.enableSegment(segmentId)) {
return Response.status(Response.Status.NOT_FOUND).build();
return Response.noContent().build();
}
return Response.status(Response.Status.OK).build();
return Response.ok().build();
}
@GET
@Path("/{dataSourceName}/tiers")
@Produces("application/json")
public Response getSegmentDataSourceTiers(
@PathParam("dataSourceName") String dataSourceName
)
{
Set<String> retVal = Sets.newHashSet();
for (DruidServer druidServer : serverInventoryView.getInventory()) {
if (druidServer.getDataSource(dataSourceName) != null) {
retVal.add(druidServer.getTier());
}
}
return Response.ok(retVal).build();
}
private DruidDataSource getDataSource(final String dataSourceName)
......@@ -345,4 +551,23 @@ public class DatasourcesResource
);
return dataSources;
}
private Pair<DataSegment, Set<String>> getSegment(String segmentId)
{
DataSegment theSegment = null;
Set<String> servers = Sets.newHashSet();
for (DruidServer druidServer : serverInventoryView.getInventory()) {
DataSegment currSegment = druidServer.getSegments().get(segmentId);
if (currSegment != null) {
theSegment = currSegment;
servers.add(druidServer.getHost());
}
}
if (theSegment == null) {
return null;
}
return new Pair<>(theSegment, servers);
}
}
......@@ -48,6 +48,8 @@ public class ServersResource
return new ImmutableMap.Builder<String, Object>()
.put("host", input.getHost())
.put("tier", input.getTier())
.put("type", input.getType())
.put("priority", input.getPriority())
.put("currSize", input.getCurrSize())
.put("maxSize", input.getMaxSize())
.build();
......
......@@ -19,18 +19,24 @@
package io.druid.server.http;
import com.google.api.client.util.Lists;
import com.google.api.client.util.Maps;
import com.google.common.base.Function;
import com.google.common.collect.HashBasedTable;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Iterables;
import com.google.common.collect.Sets;
import com.google.common.collect.Table;
import com.google.inject.Inject;
import com.metamx.common.MapUtils;
import io.druid.client.DruidDataSource;
import io.druid.client.DruidServer;
import io.druid.client.InventoryView;
import io.druid.timeline.DataSegment;
import org.joda.time.Interval;
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.QueryParam;
import javax.ws.rs.core.Response;
......@@ -86,4 +92,55 @@ public class TiersResource
return builder.entity(tiers).build();
}
@GET
@Path("/{tierName}")
@Produces("application/json")
public Response getTierDatasources(
@PathParam("tierName") String tierName,
@QueryParam("simple") String simple
)
{
if (simple != null) {
Table<String, Interval, Map<String, Object>> retVal = HashBasedTable.create();
for (DruidServer druidServer : serverInventoryView.getInventory()) {
if (druidServer.getTier().equalsIgnoreCase(tierName)) {
for (DataSegment dataSegment : druidServer.getSegments().values()) {
Map<String, Object> properties = retVal.get(dataSegment.getDataSource(), dataSegment.getInterval());
if (properties == null) {
properties = Maps.newHashMap();
retVal.put(dataSegment.getDataSource(), dataSegment.getInterval(), properties);
}
properties.put("size", MapUtils.getLong(properties, "size", 0L) + dataSegment.getSize());
properties.put("count", MapUtils.getInt(properties, "count", 0) + 1);
}
}
}
return Response.ok(retVal.rowMap()).build();
}
Set<String> retVal = Sets.newHashSet();
for (DruidServer druidServer : serverInventoryView.getInventory()) {
if (druidServer.getTier().equalsIgnoreCase(tierName)) {
retVal.addAll(
Lists.newArrayList(
Iterables.transform(
druidServer.getDataSources(),
new Function<DruidDataSource, String>()
{
@Override
public String apply(DruidDataSource input)
{
return input.getName();
}
}
)
)
);
}
}
return Response.ok(retVal).build();
}
}
......@@ -49,7 +49,7 @@ import io.druid.server.DruidNode;
import io.druid.server.StatusResource;
import org.eclipse.jetty.server.Connector;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.nio.SelectChannelConnector;
import org.eclipse.jetty.server.ServerConnector;
import org.eclipse.jetty.util.thread.QueuedThreadPool;
import javax.servlet.ServletException;
......@@ -154,13 +154,11 @@ public class JettyServerModule extends JerseyServletModule
threadPool.setMinThreads(config.getNumThreads());
threadPool.setMaxThreads(config.getNumThreads());
final Server server = new Server();
server.setThreadPool(threadPool);
final Server server = new Server(threadPool);
SelectChannelConnector connector = new SelectChannelConnector();
ServerConnector connector = new ServerConnector(server);
connector.setPort(node.getPort());
connector.setMaxIdleTime(Ints.checkedCast(config.getMaxIdleTime().toStandardDuration().getMillis()));
connector.setStatsOn(true);
connector.setIdleTimeout(Ints.checkedCast(config.getMaxIdleTime().toStandardDuration().getMillis()));
server.setConnectors(new Connector[]{connector});
......
......@@ -31,7 +31,8 @@ public class ServerConfig
{
@JsonProperty
@Min(1)
private int numThreads = Math.max(10, Runtime.getRuntime().availableProcessors() + 1);
// Jetty defaults are whack
private int numThreads = Math.max(10, (Runtime.getRuntime().availableProcessors() * 17) / 16 + 2);
@JsonProperty
@NotNull
......
......@@ -57,7 +57,7 @@ public class CoordinatorRuleManager
private final HttpClient httpClient;
private final ObjectMapper jsonMapper;
private final Supplier<TierConfig> config;
private final Supplier<TieredBrokerConfig> config;
private final ServerDiscoverySelector selector;
private final StatusResponseHandler responseHandler;
......@@ -73,7 +73,7 @@ public class CoordinatorRuleManager
public CoordinatorRuleManager(
@Global HttpClient httpClient,
@Json ObjectMapper jsonMapper,
Supplier<TierConfig> config,
Supplier<TieredBrokerConfig> config,
ServerDiscoverySelector selector
)
{
......
......@@ -19,86 +19,67 @@
package io.druid.server.router;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.common.base.Throwables;
import com.google.inject.Inject;
import com.metamx.common.Pair;
import com.metamx.common.guava.Sequence;
import com.metamx.common.guava.Sequences;
import com.metamx.emitter.EmittingLogger;
import com.metamx.http.client.HttpClient;
import io.druid.client.DirectDruidClient;
import io.druid.client.selector.Server;
import io.druid.curator.discovery.ServerDiscoveryFactory;
import io.druid.curator.discovery.ServerDiscoverySelector;
import io.druid.query.Query;
import io.druid.query.QueryRunner;
import io.druid.query.QueryToolChestWarehouse;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
/**
*/
public class TierAwareQueryRunner<T> implements QueryRunner<T>
public class QueryHostFinder<T>
{
private static EmittingLogger log = new EmittingLogger(TierAwareQueryRunner.class);
private static EmittingLogger log = new EmittingLogger(QueryHostFinder.class);
private final QueryToolChestWarehouse warehouse;
private final ObjectMapper objectMapper;
private final HttpClient httpClient;
private final BrokerSelector<T> brokerSelector;
private final TierConfig tierConfig;
private final TieredBrokerHostSelector hostSelector;
private final ConcurrentHashMap<String, Server> serverBackup = new ConcurrentHashMap<String, Server>();
public TierAwareQueryRunner(
QueryToolChestWarehouse warehouse,
ObjectMapper objectMapper,
HttpClient httpClient,
BrokerSelector<T> brokerSelector,
TierConfig tierConfig
@Inject
public QueryHostFinder(
TieredBrokerHostSelector hostSelector
)
{
this.warehouse = warehouse;
this.objectMapper = objectMapper;
this.httpClient = httpClient;
this.brokerSelector = brokerSelector;
this.tierConfig = tierConfig;
this.hostSelector = hostSelector;
}
public Server findServer(Query<T> query)
{
final Pair<String, ServerDiscoverySelector> selected = brokerSelector.select(query);
final String brokerServiceName = selected.lhs;
final Pair<String, ServerDiscoverySelector> selected = hostSelector.select(query);
final String serviceName = selected.lhs;
final ServerDiscoverySelector selector = selected.rhs;
Server server = selector.pick();
if (server == null) {
log.error(
"WTF?! No server found for brokerServiceName[%s]. Using backup",
brokerServiceName
"WTF?! No server found for serviceName[%s]. Using backup",
serviceName
);
server = serverBackup.get(brokerServiceName);
server = serverBackup.get(serviceName);
if (server == null) {
log.makeAlert(
"WTF?! No backup found for brokerServiceName[%s]. Using default[%s]",
brokerServiceName,
tierConfig.getDefaultBrokerServiceName()
).emit();
log.error(
"WTF?! No backup found for serviceName[%s]. Using default[%s]",
serviceName,
hostSelector.getDefaultServiceName()
);
server = serverBackup.get(tierConfig.getDefaultBrokerServiceName());
server = serverBackup.get(hostSelector.getDefaultServiceName());
}
} else {
serverBackup.put(brokerServiceName, server);
}
if (server != null) {
serverBackup.put(serviceName, server);
}
return server;
}
@Override
public Sequence<T> run(Query<T> query)
public String getHost(Query<T> query)
{
Server server = findServer(query);
......@@ -106,16 +87,12 @@ public class TierAwareQueryRunner<T> implements QueryRunner<T>
log.makeAlert(
"Catastrophic failure! No servers found at all! Failing request!"
).emit();
return Sequences.empty();
return null;
}
QueryRunner<T> client = new DirectDruidClient<T>(
warehouse,
objectMapper,
httpClient,
server.getHost()
);
log.debug("Selected [%s]", server.getHost());
return client.run(query);
return server.getHost();
}
}
......@@ -29,7 +29,7 @@ import java.util.LinkedHashMap;
/**
*/
public class TierConfig
public class TieredBrokerConfig
{
@JsonProperty
@NotNull
......
......@@ -20,20 +20,20 @@
package io.druid.server.router;
import com.google.common.base.Throwables;
import com.google.common.collect.Iterables;
import com.google.inject.Inject;
import com.metamx.common.Pair;
import com.metamx.common.concurrent.ScheduledExecutors;
import com.metamx.common.lifecycle.LifecycleStart;
import com.metamx.common.lifecycle.LifecycleStop;
import com.metamx.emitter.EmittingLogger;
import io.druid.concurrent.Execs;
import io.druid.client.selector.HostSelector;
import io.druid.curator.discovery.ServerDiscoveryFactory;
import io.druid.curator.discovery.ServerDiscoverySelector;
import io.druid.query.Query;
import io.druid.query.timeboundary.TimeBoundaryQuery;
import io.druid.server.coordinator.rules.LoadRule;
import io.druid.server.coordinator.rules.Rule;
import org.joda.time.DateTime;
import org.joda.time.Duration;
import org.joda.time.Interval;
import java.util.List;
......@@ -42,23 +42,23 @@ import java.util.concurrent.ConcurrentHashMap;
/**
*/
public class BrokerSelector<T>
public class TieredBrokerHostSelector<T> implements HostSelector<T>
{
private static EmittingLogger log = new EmittingLogger(BrokerSelector.class);
private static EmittingLogger log = new EmittingLogger(TieredBrokerHostSelector.class);
private final CoordinatorRuleManager ruleManager;
private final TierConfig tierConfig;
private final TieredBrokerConfig tierConfig;
private final ServerDiscoveryFactory serverDiscoveryFactory;
private final ConcurrentHashMap<String, ServerDiscoverySelector> selectorMap = new ConcurrentHashMap<String, ServerDiscoverySelector>();
private final ConcurrentHashMap<String, ServerDiscoverySelector> selectorMap = new ConcurrentHashMap<>();
private final Object lock = new Object();
private volatile boolean started = false;
@Inject
public BrokerSelector(
public TieredBrokerHostSelector(
CoordinatorRuleManager ruleManager,
TierConfig tierConfig,
TieredBrokerConfig tierConfig,
ServerDiscoveryFactory serverDiscoveryFactory
)
{
......@@ -112,6 +112,12 @@ public class BrokerSelector<T>
}
}
@Override
public String getDefaultServiceName()
{
return tierConfig.getDefaultBrokerServiceName();
}
public Pair<String, ServerDiscoverySelector> select(final Query<T> query)
{
synchronized (lock) {
......@@ -120,35 +126,46 @@ public class BrokerSelector<T>
}
}
List<Rule> rules = ruleManager.getRulesWithDefault((query.getDataSource()).getName());
String brokerServiceName = null;
// find the rule that can apply to the entire set of intervals
DateTime now = new DateTime();
int lastRulePosition = -1;
LoadRule baseRule = null;
// Somewhat janky way of always selecting highest priority broker for this type of query
if (query instanceof TimeBoundaryQuery) {
brokerServiceName = Iterables.getFirst(
tierConfig.getTierToBrokerMap().values(),
tierConfig.getDefaultBrokerServiceName()
);
}
for (Interval interval : query.getIntervals()) {
int currRulePosition = 0;
for (Rule rule : rules) {
if (rule instanceof LoadRule && currRulePosition > lastRulePosition && rule.appliesTo(interval, now)) {
lastRulePosition = currRulePosition;
baseRule = (LoadRule) rule;
break;
if (brokerServiceName == null) {
List<Rule> rules = ruleManager.getRulesWithDefault((query.getDataSource()).getName());
// find the rule that can apply to the entire set of intervals
DateTime now = new DateTime();
int lastRulePosition = -1;
LoadRule baseRule = null;
for (Interval interval : query.getIntervals()) {
int currRulePosition = 0;
for (Rule rule : rules) {
if (rule instanceof LoadRule && currRulePosition > lastRulePosition && rule.appliesTo(interval, now)) {
lastRulePosition = currRulePosition;
baseRule = (LoadRule) rule;
break;
}
currRulePosition++;
}
currRulePosition++;
}
}
if (baseRule == null) {
return null;
}
if (baseRule == null) {
return null;
}
// in the baseRule, find the broker of highest priority
String brokerServiceName = null;
for (Map.Entry<String, String> entry : tierConfig.getTierToBrokerMap().entrySet()) {
if (baseRule.getTieredReplicants().containsKey(entry.getKey())) {
brokerServiceName = entry.getValue();
break;
// in the baseRule, find the broker of highest priority
for (Map.Entry<String, String> entry : tierConfig.getTierToBrokerMap().entrySet()) {
if (baseRule.getTieredReplicants().containsKey(entry.getKey())) {
brokerServiceName = entry.getValue();
break;
}
}
}
......
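For context, TieredBrokerConfig (the renamed TierConfig) is bound by CliRouter under the druid.router prefix via JsonConfigProvider, as shown further down in this commit. Assuming the JSON property names mirror the getters used here — tierToBrokerMap and defaultBrokerServiceName, an assumption since the field declarations are collapsed out of this diff — a router runtime.properties fragment would look roughly like the sketch below; the tier names and broker service names are placeholders:

```
druid.router.defaultBrokerServiceName=coldBroker
druid.router.tierToBrokerMap={"hot":"hotBroker","_default_tier":"coldBroker"}
```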
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013, 2014 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.client;
import com.google.common.collect.ImmutableMap;
import com.google.common.collect.Lists;
import com.metamx.common.ISE;
import com.metamx.common.guava.ResourceClosingSequence;
import com.metamx.common.guava.Sequence;
import com.metamx.common.guava.Sequences;
import com.metamx.common.guava.Yielder;
import com.metamx.common.guava.YieldingAccumulator;
import io.druid.client.cache.Cache;
import io.druid.client.cache.CacheConfig;
import io.druid.granularity.AllGranularity;
import io.druid.jackson.DefaultObjectMapper;
import io.druid.query.Query;
import io.druid.query.QueryRunner;
import io.druid.query.Result;
import io.druid.query.SegmentDescriptor;
import io.druid.query.aggregation.AggregatorFactory;
import io.druid.query.aggregation.CountAggregatorFactory;
import io.druid.query.aggregation.LongSumAggregatorFactory;
import io.druid.query.topn.TopNQueryBuilder;
import io.druid.query.topn.TopNQueryConfig;
import io.druid.query.topn.TopNQueryQueryToolChest;
import io.druid.query.topn.TopNResultValue;
import org.easymock.EasyMock;
import org.joda.time.DateTime;
import org.joda.time.Interval;
import org.junit.Assert;
import org.junit.Test;
import java.io.Closeable;
import java.io.IOException;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
public class CachePopulatingQueryRunnerTest
{
private static final List<AggregatorFactory> AGGS = Arrays.asList(
new CountAggregatorFactory("rows"),
new LongSumAggregatorFactory("imps", "imps"),
new LongSumAggregatorFactory("impers", "imps")
);
@Test
public void testCachePopulatingQueryRunnerResourceClosing() throws Exception
{
Iterable<Result<TopNResultValue>> expectedRes = makeTopNResults(
new DateTime("2011-01-05"), "a", 50, 4994, "b", 50, 4993, "c", 50, 4992,
new DateTime("2011-01-06"), "a", 50, 4991, "b", 50, 4990, "c", 50, 4989,
new DateTime("2011-01-07"), "a", 50, 4991, "b", 50, 4990, "c", 50, 4989,
new DateTime("2011-01-08"), "a", 50, 4988, "b", 50, 4987, "c", 50, 4986,
new DateTime("2011-01-09"), "a", 50, 4985, "b", 50, 4984, "c", 50, 4983
);
final TopNQueryBuilder builder = new TopNQueryBuilder()
.dataSource("ds")
.dimension("top_dim")
.metric("imps")
.threshold(3)
.intervals("2011-01-05/2011-01-10")
.aggregators(AGGS)
.granularity(AllGranularity.ALL);
final AssertingClosable closable = new AssertingClosable();
final Sequence resultSeq = new ResourceClosingSequence(
Sequences.simple(expectedRes), closable
)
{
@Override
public Yielder toYielder(Object initValue, YieldingAccumulator accumulator)
{
Assert.assertFalse(closable.isClosed());
return super.toYielder(
initValue,
accumulator
);
}
};
Cache cache = EasyMock.createMock(Cache.class);
// the cache populator ignores populating when the cache is local, so a dummy cache is used here

EasyMock.expect(cache.isLocal()).andReturn(false);
CachePopulatingQueryRunner runner = new CachePopulatingQueryRunner(
"segment",
new SegmentDescriptor(new Interval("2011/2012"), "version", 0),
new DefaultObjectMapper(),
cache,
new TopNQueryQueryToolChest(new TopNQueryConfig()),
new QueryRunner()
{
@Override
public Sequence run(Query query)
{
return resultSeq;
}
},
new CacheConfig()
);
Sequence res = runner.run(builder.build());
// base sequence is not closed yet
Assert.assertTrue(closable.isClosed());
ArrayList results = Sequences.toList(res, new ArrayList());
Assert.assertTrue(closable.isClosed());
Assert.assertEquals(expectedRes, results);
}
private Iterable<Result<TopNResultValue>> makeTopNResults(Object... objects)
{
List<Result<TopNResultValue>> retVal = Lists.newArrayList();
int index = 0;
while (index < objects.length) {
DateTime timestamp = (DateTime) objects[index++];
List<Map<String, Object>> values = Lists.newArrayList();
while (index < objects.length && !(objects[index] instanceof DateTime)) {
if (objects.length - index < 3) {
throw new ISE(
"expect 3 values for each entry in the top list, had %d values left.", objects.length - index
);
}
final double imps = ((Number) objects[index + 2]).doubleValue();
final double rows = ((Number) objects[index + 1]).doubleValue();
values.add(
ImmutableMap.of(
"top_dim", objects[index],
"rows", rows,
"imps", imps,
"impers", imps,
"avg_imps_per_row", imps / rows
)
);
index += 3;
}
retVal.add(new Result<>(timestamp, new TopNResultValue(values)));
}
return retVal;
}
private static class AssertingClosable implements Closeable
{
private final AtomicBoolean closed = new AtomicBoolean(false);
@Override
public void close() throws IOException
{
Assert.assertFalse(closed.get());
Assert.assertTrue(closed.compareAndSet(false, true));
}
public boolean isClosed()
{
return closed.get();
}
}
}
......@@ -40,20 +40,20 @@ import java.util.LinkedHashMap;
/**
*/
public class TierAwareQueryRunnerTest
public class QueryHostFinderTest
{
private ServerDiscoverySelector selector;
private BrokerSelector brokerSelector;
private TierConfig config;
private TieredBrokerHostSelector brokerSelector;
private TieredBrokerConfig config;
private Server server;
@Before
public void setUp() throws Exception
{
selector = EasyMock.createMock(ServerDiscoverySelector.class);
brokerSelector = EasyMock.createMock(BrokerSelector.class);
brokerSelector = EasyMock.createMock(TieredBrokerHostSelector.class);
config = new TierConfig()
config = new TieredBrokerConfig()
{
@Override
public LinkedHashMap<String, String> getTierToBrokerMap()
......@@ -118,12 +118,8 @@ public class TierAwareQueryRunnerTest
EasyMock.expect(selector.pick()).andReturn(server).once();
EasyMock.replay(selector);
TierAwareQueryRunner queryRunner = new TierAwareQueryRunner(
null,
null,
null,
brokerSelector,
config
QueryHostFinder queryRunner = new QueryHostFinder(
brokerSelector
);
Server server = queryRunner.findServer(
......
......@@ -50,11 +50,11 @@ import java.util.List;
/**
*/
public class BrokerSelectorTest
public class TieredBrokerHostSelectorTest
{
private ServerDiscoveryFactory factory;
private ServerDiscoverySelector selector;
private BrokerSelector brokerSelector;
private TieredBrokerHostSelector brokerSelector;
@Before
public void setUp() throws Exception
......@@ -62,9 +62,9 @@ public class BrokerSelectorTest
factory = EasyMock.createMock(ServerDiscoveryFactory.class);
selector = EasyMock.createMock(ServerDiscoverySelector.class);
brokerSelector = new BrokerSelector(
brokerSelector = new TieredBrokerHostSelector(
new TestRuleManager(null, null, null, null),
new TierConfig()
new TieredBrokerConfig()
{
@Override
public LinkedHashMap<String, String> getTierToBrokerMap()
......@@ -112,11 +112,12 @@ public class BrokerSelectorTest
public void testBasicSelect() throws Exception
{
String brokerName = (String) brokerSelector.select(
new TimeBoundaryQuery(
new TableDataSource("test"),
new MultipleIntervalSegmentSpec(Arrays.<Interval>asList(new Interval("2011-08-31/2011-09-01"))),
null
)
Druids.newTimeseriesQueryBuilder()
.dataSource("test")
.granularity("all")
.aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows")))
.intervals(Arrays.<Interval>asList(new Interval("2011-08-31/2011-09-01")))
.build()
).lhs;
Assert.assertEquals("coldBroker", brokerName);
......@@ -127,11 +128,12 @@ public class BrokerSelectorTest
public void testBasicSelect2() throws Exception
{
String brokerName = (String) brokerSelector.select(
new TimeBoundaryQuery(
new TableDataSource("test"),
new MultipleIntervalSegmentSpec(Arrays.<Interval>asList(new Interval("2013-08-31/2013-09-01"))),
null
)
Druids.newTimeseriesQueryBuilder()
.dataSource("test")
.granularity("all")
.aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows")))
.intervals(Arrays.<Interval>asList(new Interval("2013-08-31/2013-09-01")))
.build()
).lhs;
Assert.assertEquals("hotBroker", brokerName);
......@@ -141,11 +143,12 @@ public class BrokerSelectorTest
public void testSelectMatchesNothing() throws Exception
{
Pair retVal = brokerSelector.select(
new TimeBoundaryQuery(
new TableDataSource("test"),
new MultipleIntervalSegmentSpec(Arrays.<Interval>asList(new Interval("2010-08-31/2010-09-01"))),
null
)
Druids.newTimeseriesQueryBuilder()
.dataSource("test")
.granularity("all")
.aggregators(Arrays.<AggregatorFactory>asList(new CountAggregatorFactory("rows")))
.intervals(Arrays.<Interval>asList(new Interval("2010-08-31/2010-09-01")))
.build()
);
Assert.assertEquals(null, retVal);
......@@ -199,7 +202,7 @@ public class BrokerSelectorTest
public TestRuleManager(
@Global HttpClient httpClient,
@Json ObjectMapper jsonMapper,
Supplier<TierConfig> config,
Supplier<TieredBrokerConfig> config,
ServerDiscoverySelector selector
)
{
......
......@@ -27,7 +27,7 @@
<parent>
<groupId>io.druid</groupId>
<artifactId>druid</artifactId>
<version>0.6.75-SNAPSHOT</version>
<version>0.6.83-SNAPSHOT</version>
</parent>
<dependencies>
......
......@@ -55,7 +55,7 @@ import java.util.List;
*/
@Command(
name = "broker",
description = "Runs a broker node, see http://druid.io/docs/0.6.73/Broker.html for a description"
description = "Runs a broker node, see http://druid.io/docs/0.6.81/Broker.html for a description"
)
public class CliBroker extends ServerRunnable
{
......
......@@ -66,7 +66,7 @@ import java.util.List;
*/
@Command(
name = "coordinator",
description = "Runs the Coordinator, see http://druid.io/docs/0.6.73/Coordinator.html for a description."
description = "Runs the Coordinator, see http://druid.io/docs/0.6.81/Coordinator.html for a description."
)
public class CliCoordinator extends ServerRunnable
{
......
......@@ -41,7 +41,7 @@ import java.util.List;
*/
@Command(
name = "hadoop",
description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.73/Batch-ingestion.html for a description."
description = "Runs the batch Hadoop Druid Indexer, see http://druid.io/docs/0.6.81/Batch-ingestion.html for a description."
)
public class CliHadoopIndexer implements Runnable
{
......
......@@ -46,7 +46,7 @@ import java.util.List;
*/
@Command(
name = "historical",
description = "Runs a Historical node, see http://druid.io/docs/0.6.73/Historical.html for a description"
description = "Runs a Historical node, see http://druid.io/docs/0.6.81/Historical.html for a description"
)
public class CliHistorical extends ServerRunnable
{
......
......@@ -93,7 +93,7 @@ import java.util.List;
*/
@Command(
name = "overlord",
description = "Runs an Overlord node, see http://druid.io/docs/0.6.73/Indexing-Service.html for a description"
description = "Runs an Overlord node, see http://druid.io/docs/0.6.81/Indexing-Service.html for a description"
)
public class CliOverlord extends ServerRunnable
{
......
......@@ -30,7 +30,7 @@ import java.util.List;
*/
@Command(
name = "realtime",
description = "Runs a realtime node, see http://druid.io/docs/0.6.73/Realtime.html for a description"
description = "Runs a realtime node, see http://druid.io/docs/0.6.81/Realtime.html for a description"
)
public class CliRealtime extends ServerRunnable
{
......
......@@ -42,7 +42,7 @@ import java.util.concurrent.Executor;
*/
@Command(
name = "realtime",
description = "Runs a standalone realtime node for examples, see http://druid.io/docs/0.6.73/Realtime.html for a description"
description = "Runs a standalone realtime node for examples, see http://druid.io/docs/latest/Realtime.html for a description"
)
public class CliRealtimeExample extends ServerRunnable
{
......
......@@ -25,24 +25,20 @@ import com.google.inject.Module;
import com.google.inject.Provides;
import com.metamx.common.logger.Logger;
import io.airlift.command.Command;
import io.druid.client.RoutingDruidClient;
import io.druid.curator.discovery.DiscoveryModule;
import io.druid.curator.discovery.ServerDiscoveryFactory;
import io.druid.curator.discovery.ServerDiscoverySelector;
import io.druid.guice.Jerseys;
import io.druid.guice.JsonConfigProvider;
import io.druid.guice.LazySingleton;
import io.druid.guice.LifecycleModule;
import io.druid.guice.ManageLifecycle;
import io.druid.guice.annotations.Self;
import io.druid.query.MapQueryToolChestWarehouse;
import io.druid.query.QuerySegmentWalker;
import io.druid.query.QueryToolChestWarehouse;
import io.druid.server.QueryResource;
import io.druid.server.initialization.JettyServerInitializer;
import io.druid.server.router.BrokerSelector;
import io.druid.server.router.CoordinatorRuleManager;
import io.druid.server.router.RouterQuerySegmentWalker;
import io.druid.server.router.TierConfig;
import io.druid.server.router.QueryHostFinder;
import io.druid.server.router.TieredBrokerConfig;
import io.druid.server.router.TieredBrokerHostSelector;
import org.eclipse.jetty.server.Server;
import java.util.List;
......@@ -71,19 +67,16 @@ public class CliRouter extends ServerRunnable
@Override
public void configure(Binder binder)
{
JsonConfigProvider.bind(binder, "druid.router", TierConfig.class);
JsonConfigProvider.bind(binder, "druid.router", TieredBrokerConfig.class);
binder.bind(CoordinatorRuleManager.class);
LifecycleModule.register(binder, CoordinatorRuleManager.class);
binder.bind(QueryToolChestWarehouse.class).to(MapQueryToolChestWarehouse.class);
binder.bind(TieredBrokerHostSelector.class).in(ManageLifecycle.class);
binder.bind(QueryHostFinder.class).in(LazySingleton.class);
binder.bind(RoutingDruidClient.class).in(LazySingleton.class);
binder.bind(BrokerSelector.class).in(ManageLifecycle.class);
binder.bind(QuerySegmentWalker.class).to(RouterQuerySegmentWalker.class).in(LazySingleton.class);
binder.bind(JettyServerInitializer.class).to(QueryJettyServerInitializer.class).in(LazySingleton.class);
Jerseys.addResource(binder, QueryResource.class);
LifecycleModule.register(binder, QueryResource.class);
binder.bind(JettyServerInitializer.class).to(RouterJettyServerInitializer.class).in(LazySingleton.class);
LifecycleModule.register(binder, Server.class);
DiscoveryModule.register(binder, Self.class);
......@@ -92,7 +85,7 @@ public class CliRouter extends ServerRunnable
@Provides
@ManageLifecycle
public ServerDiscoverySelector getCoordinatorServerDiscoverySelector(
TierConfig config,
TieredBrokerConfig config,
ServerDiscoveryFactory factory
)
......
......@@ -38,16 +38,13 @@ public class QueryJettyServerInitializer implements JettyServerInitializer
@Override
public void initialize(Server server, Injector injector)
{
final ServletContextHandler queries = new ServletContextHandler(ServletContextHandler.SESSIONS);
queries.setResourceBase("/");
final ServletContextHandler root = new ServletContextHandler(ServletContextHandler.SESSIONS);
root.addServlet(new ServletHolder(new DefaultServlet()), "/*");
root.addFilter(GzipFilter.class, "/*", null);
root.addFilter(GuiceFilter.class, "/*", null);
final HandlerList handlerList = new HandlerList();
handlerList.setHandlers(new Handler[]{queries, root, new DefaultHandler()});
handlerList.setHandlers(new Handler[]{root, new DefaultHandler()});
server.setHandler(handlerList);
}
}
/*
* Druid - a distributed column store.
* Copyright (C) 2012, 2013 Metamarkets Group Inc.
*
* This program is free software; you can redistribute it and/or
* modify it under the terms of the GNU General Public License
* as published by the Free Software Foundation; either version 2
* of the License, or (at your option) any later version.
*
* This program is distributed in the hope that it will be useful,
* but WITHOUT ANY WARRANTY; without even the implied warranty of
* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
* GNU General Public License for more details.
*
* You should have received a copy of the GNU General Public License
* along with this program; if not, write to the Free Software
* Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA.
*/
package io.druid.cli;
import com.fasterxml.jackson.databind.ObjectMapper;
import com.google.inject.Inject;
import com.google.inject.Injector;
import com.google.inject.servlet.GuiceFilter;
import com.metamx.emitter.service.ServiceEmitter;
import io.druid.client.RoutingDruidClient;
import io.druid.guice.annotations.Json;
import io.druid.guice.annotations.Smile;
import io.druid.server.AsyncQueryForwardingServlet;
import io.druid.server.QueryIDProvider;
import io.druid.server.initialization.JettyServerInitializer;
import io.druid.server.log.RequestLogger;
import io.druid.server.router.QueryHostFinder;
import org.eclipse.jetty.server.Handler;
import org.eclipse.jetty.server.Server;
import org.eclipse.jetty.server.handler.DefaultHandler;
import org.eclipse.jetty.server.handler.HandlerList;
import org.eclipse.jetty.servlet.DefaultServlet;
import org.eclipse.jetty.servlet.ServletContextHandler;
import org.eclipse.jetty.servlet.ServletHolder;
import org.eclipse.jetty.servlets.GzipFilter;
/**
*/
public class RouterJettyServerInitializer implements JettyServerInitializer
{
private final ObjectMapper jsonMapper;
private final ObjectMapper smileMapper;
private final QueryHostFinder hostFinder;
private final RoutingDruidClient routingDruidClient;
private final ServiceEmitter emitter;
private final RequestLogger requestLogger;
private final QueryIDProvider idProvider;
@Inject
public RouterJettyServerInitializer(
@Json ObjectMapper jsonMapper,
@Smile ObjectMapper smileMapper,
QueryHostFinder hostFinder,
RoutingDruidClient routingDruidClient,
ServiceEmitter emitter,
RequestLogger requestLogger,
QueryIDProvider idProvider
)
{
this.jsonMapper = jsonMapper;
this.smileMapper = smileMapper;
this.hostFinder = hostFinder;
this.routingDruidClient = routingDruidClient;
this.emitter = emitter;
this.requestLogger = requestLogger;
this.idProvider = idProvider;
}
@Override
public void initialize(Server server, Injector injector)
{
final ServletContextHandler queries = new ServletContextHandler(ServletContextHandler.SESSIONS);
queries.addServlet(
new ServletHolder(
new AsyncQueryForwardingServlet(
jsonMapper,
smileMapper,
hostFinder,
routingDruidClient,
emitter,
requestLogger,
idProvider
)
), "/druid/v2/*"
);
queries.addFilter(GzipFilter.class, "/druid/v2/*", null);
final ServletContextHandler root = new ServletContextHandler(ServletContextHandler.SESSIONS);
root.addServlet(new ServletHolder(new DefaultServlet()), "/*");
root.addFilter(GzipFilter.class, "/*", null);
root.addFilter(GuiceFilter.class, "/*", null);
final HandlerList handlerList = new HandlerList();
handlerList.setHandlers(new Handler[]{queries, root, new DefaultHandler()});
server.setHandler(handlerList);
}
}
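With AsyncQueryForwardingServlet mounted at /druid/v2/* above, clients can send queries to the router exactly as they would to a broker, and the router proxies them to the broker chosen by QueryHostFinder. A usage sketch (the router address, data source, and interval below are placeholders):

``` bash
curl -X POST 'http://<ROUTER_IP>:8080/druid/v2/?pretty' \
  -H 'Content-Type: application/json' \
  -d '{
        "queryType": "timeseries",
        "dataSource": "test",
        "granularity": "all",
        "intervals": ["2011-08-31/2011-09-01"],
        "aggregations": [{"type": "count", "name": "rows"}]
      }'
```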