Forever310 / druid
Commit f5dc75ef
Authored Apr 07, 2014 by fjy

Merge pull request #464 from metamx/fix-context

Fix context to be backwards compatible with String values

Parents: e8c41ab3, dda09633
Showing 16 changed files with 173 additions and 90 deletions (+173 −90)
- processing/src/main/java/io/druid/query/BaseQuery.java (+56 −0)
- processing/src/main/java/io/druid/query/BySegmentQueryRunner.java (+1 −1)
- processing/src/main/java/io/druid/query/BySegmentSkippingQueryRunner.java (+1 −1)
- processing/src/main/java/io/druid/query/ChainedExecutionQueryRunner.java (+1 −1)
- processing/src/main/java/io/druid/query/FinalizeResultsQueryRunner.java (+4 −5)
- processing/src/main/java/io/druid/query/GroupByParallelQueryRunner.java (+1 −1)
- processing/src/main/java/io/druid/query/Query.java (+7 −0)
- processing/src/main/java/io/druid/query/search/SearchQueryQueryToolChest.java (+1 −1)
- processing/src/main/java/io/druid/query/topn/TopNQueryQueryToolChest.java (+1 −1)
- server/src/main/java/io/druid/client/CachePopulatingQueryRunner.java (+1 −1)
- server/src/main/java/io/druid/client/CachingClusteredClient.java (+7 −8)
- server/src/main/java/io/druid/client/DirectDruidClient.java (+1 −1)
- server/src/main/java/io/druid/client/RoutingDruidClient.java (+1 −2)
- server/src/main/java/io/druid/server/AsyncQueryForwardingServlet.java (+14 −3)
- server/src/main/java/io/druid/server/QueryIDProvider.java (+1 −1)
- server/src/test/java/io/druid/client/CachingClusteredClientTest.java (+75 −63)
processing/src/main/java/io/druid/query/BaseQuery.java

```diff
@@ -23,6 +23,7 @@ import com.fasterxml.jackson.annotation.JsonProperty;
 import com.google.common.base.Preconditions;
 import com.google.common.collect.ImmutableMap;
 import com.google.common.collect.Maps;
+import com.metamx.common.ISE;
 import com.metamx.common.guava.Sequence;
 import io.druid.query.spec.QuerySegmentSpec;
 import org.joda.time.Duration;
@@ -120,6 +121,61 @@ public abstract class BaseQuery<T> implements Query<T>
     return retVal == null ? defaultValue : retVal;
   }
 
+  @Override
+  public int getContextPriority(int defaultValue)
+  {
+    Object val = context.get("priority");
+    if (val == null) {
+      return defaultValue;
+    }
+    if (val instanceof String) {
+      return Integer.parseInt((String) val);
+    } else if (val instanceof Integer) {
+      return (int) val;
+    } else {
+      throw new ISE("Unknown type [%s]", val.getClass());
+    }
+  }
+
+  @Override
+  public boolean getContextBySegment(boolean defaultValue)
+  {
+    return parseBoolean("bySegment", defaultValue);
+  }
+
+  @Override
+  public boolean getContextPopulateCache(boolean defaultValue)
+  {
+    return parseBoolean("populateCache", defaultValue);
+  }
+
+  @Override
+  public boolean getContextUseCache(boolean defaultValue)
+  {
+    return parseBoolean("useCache", defaultValue);
+  }
+
+  @Override
+  public boolean getContextFinalize(boolean defaultValue)
+  {
+    return parseBoolean("finalize", defaultValue);
+  }
+
+  private boolean parseBoolean(String key, boolean defaultValue)
+  {
+    Object val = context.get(key);
+    if (val == null) {
+      return defaultValue;
+    }
+    if (val instanceof String) {
+      return Boolean.parseBoolean((String) val);
+    } else if (val instanceof Boolean) {
+      return (boolean) val;
+    } else {
+      throw new ISE("Unknown type [%s]. Cannot parse!", val.getClass());
+    }
+  }
+
   protected Map<String, Object> computeOverridenContext(Map<String, Object> overrides)
   {
     Map<String, Object> overridden = Maps.newTreeMap();
```
processing/src/main/java/io/druid/query/BySegmentQueryRunner.java

```diff
@@ -53,7 +53,7 @@ public class BySegmentQueryRunner<T> implements QueryRunner<T>
   @SuppressWarnings("unchecked")
   public Sequence<T> run(final Query<T> query)
   {
-    if (Boolean.parseBoolean(query.<String>getContextValue("bySegment"))) {
+    if (query.getContextBySegment(false)) {
       final Sequence<T> baseSequence = base.run(query);
       return new Sequence<T>()
       {
```
processing/src/main/java/io/druid/query/BySegmentSkippingQueryRunner.java

```diff
@@ -37,7 +37,7 @@ public abstract class BySegmentSkippingQueryRunner<T> implements QueryRunner<T>
   @Override
   public Sequence<T> run(Query<T> query)
   {
-    if (Boolean.parseBoolean(query.<String>getContextValue("bySegment"))) {
+    if (query.getContextBySegment(false)) {
       return baseRunner.run(query);
     }
```
processing/src/main/java/io/druid/query/ChainedExecutionQueryRunner.java

```diff
@@ -83,7 +83,7 @@ public class ChainedExecutionQueryRunner<T> implements QueryRunner<T>
   @Override
   public Sequence<T> run(final Query<T> query)
   {
-    final int priority = Integer.parseInt((String) query.getContextValue("priority", "0"));
+    final int priority = query.getContextValue("priority", 0);
 
     return new BaseSequence<T, Iterator<T>>(
         new BaseSequence.IteratorMaker<T, Iterator<T>>()
```
processing/src/main/java/io/druid/query/FinalizeResultsQueryRunner.java

```diff
@@ -48,8 +48,8 @@ public class FinalizeResultsQueryRunner<T> implements QueryRunner<T>
   @Override
   public Sequence<T> run(final Query<T> query)
   {
-    final boolean isBySegment = Boolean.parseBoolean(query.<String>getContextValue("bySegment"));
-    final boolean shouldFinalize = Boolean.parseBoolean(query.getContextValue("finalize", "true"));
+    final boolean isBySegment = query.getContextBySegment(false);
+    final boolean shouldFinalize = query.getContextFinalize(true);
     if (shouldFinalize) {
       Function<T, T> finalizerFn;
       if (isBySegment) {
@@ -84,8 +84,7 @@ public class FinalizeResultsQueryRunner<T> implements QueryRunner<T>
             );
           }
         };
-      }
-      else {
+      } else {
         finalizerFn = toolChest.makeMetricManipulatorFn(
             query,
             new MetricManipulationFn()
@@ -100,7 +99,7 @@ public class FinalizeResultsQueryRunner<T> implements QueryRunner<T>
     }
 
     return Sequences.map(
-        baseRunner.run(query.withOverriddenContext(ImmutableMap.<String, Object>of("finalize", "false"))),
+        baseRunner.run(query.withOverriddenContext(ImmutableMap.<String, Object>of("finalize", false))),
         finalizerFn
     );
   }
```
processing/src/main/java/io/druid/query/GroupByParallelQueryRunner.java

```diff
@@ -83,7 +83,7 @@ public class GroupByParallelQueryRunner implements QueryRunner<Row>
         query,
         configSupplier.get()
     );
-    final int priority = Integer.parseInt((String) query.getContextValue("priority", "0"));
+    final int priority = query.getContextPriority(0);
 
     if (Iterables.isEmpty(queryables)) {
       log.warn("No queryables found.");
```
...
processing/src/main/java/io/druid/query/Query.java
浏览文件 @
f5dc75ef
...
...
@@ -74,6 +74,13 @@ public interface Query<T>
public
<
ContextType
>
ContextType
getContextValue
(
String
key
,
ContextType
defaultValue
);
// For backwards compatibility
public
int
getContextPriority
(
int
defaultValue
);
public
boolean
getContextBySegment
(
boolean
defaultValue
);
public
boolean
getContextPopulateCache
(
boolean
defaultValue
);
public
boolean
getContextUseCache
(
boolean
defaultValue
);
public
boolean
getContextFinalize
(
boolean
defaultValue
);
public
Query
<
T
>
withOverriddenContext
(
Map
<
String
,
Object
>
contextOverride
);
public
Query
<
T
>
withQuerySegmentSpec
(
QuerySegmentSpec
spec
);
...
...
processing/src/main/java/io/druid/query/search/SearchQueryQueryToolChest.java
浏览文件 @
f5dc75ef
...
...
@@ -294,7 +294,7 @@ public class SearchQueryQueryToolChest extends QueryToolChest<Result<SearchResul
return
runner
.
run
(
query
);
}
final
boolean
isBySegment
=
Boolean
.
parseBoolean
((
String
)
query
.
getContextValue
(
"bySegment"
,
"false"
)
);
final
boolean
isBySegment
=
query
.
getContextBySegment
(
false
);
return
Sequences
.
map
(
runner
.
run
(
query
.
withLimit
(
config
.
getMaxSearchLimit
())),
...
...
processing/src/main/java/io/druid/query/topn/TopNQueryQueryToolChest.java

```diff
@@ -339,7 +339,7 @@ public class TopNQueryQueryToolChest extends QueryToolChest<Result<TopNResultVal
         return runner.run(query);
       }
 
-      final boolean isBySegment = Boolean.parseBoolean((String) query.getContextValue("bySegment", "false"));
+      final boolean isBySegment = query.getContextBySegment(false);
 
       return Sequences.map(
           runner.run(query.withThreshold(minTopNThreshold)),
```
server/src/main/java/io/druid/client/CachePopulatingQueryRunner.java

```diff
@@ -70,7 +70,7 @@ public class CachePopulatingQueryRunner<T> implements QueryRunner<T>
     final CacheStrategy strategy = toolChest.getCacheStrategy(query);
 
-    final boolean populateCache = Boolean.parseBoolean(query.getContextValue(CacheConfig.POPULATE_CACHE, "true"))
+    final boolean populateCache = query.getContextPopulateCache(true)
         && strategy != null
         && cacheConfig.isPopulateCache()
         // historical only populates distributed cache since the cache lookups are done at broker.
```
server/src/main/java/io/druid/client/CachingClusteredClient.java

```diff
@@ -62,7 +62,6 @@ import io.druid.timeline.partition.PartitionChunk;
 import org.joda.time.DateTime;
 import org.joda.time.Interval;
-import javax.annotation.Nullable;
 import java.io.IOException;
 import java.util.ArrayList;
 import java.util.Collections;
@@ -125,24 +124,24 @@ public class CachingClusteredClient<T> implements QueryRunner<T>
     final List<Pair<DateTime, byte[]>> cachedResults = Lists.newArrayList();
     final Map<String, CachePopulator> cachePopulatorMap = Maps.newHashMap();
 
-    final boolean useCache = Boolean.parseBoolean(query.getContextValue(CacheConfig.USE_CACHE, "true"))
+    final boolean useCache = query.getContextUseCache(true)
         && strategy != null
         && cacheConfig.isUseCache();
-    final boolean populateCache = Boolean.parseBoolean(query.getContextValue(CacheConfig.POPULATE_CACHE, "true"))
+    final boolean populateCache = query.getContextPopulateCache(true)
         && strategy != null
         && cacheConfig.isPopulateCache();
-    final boolean isBySegment = Boolean.parseBoolean(query.getContextValue("bySegment", "false"));
+    final boolean isBySegment = query.getContextBySegment(false);
 
     ImmutableMap.Builder<String, Object> contextBuilder = new ImmutableMap.Builder<>();
 
-    final String priority = query.getContextValue("priority", "0");
+    final int priority = query.getContextPriority(0);
     contextBuilder.put("priority", priority);
 
     if (populateCache) {
-      contextBuilder.put(CacheConfig.POPULATE_CACHE, "false");
-      contextBuilder.put("bySegment", "true");
+      contextBuilder.put(CacheConfig.POPULATE_CACHE, false);
+      contextBuilder.put("bySegment", true);
     }
-    contextBuilder.put("intermediate", "true");
+    contextBuilder.put("intermediate", true);
 
     final Query<T> rewrittenQuery = query.withOverriddenContext(contextBuilder.build());
```
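The broker now writes typed values, not Strings, into the rewritten context it fans out to historicals. A minimal sketch of that rewriting logic (plain `LinkedHashMap` standing in for Guava's `ImmutableMap.Builder` and Druid's query plumbing; the method name `rewriteContext` is hypothetical):

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of the context-override map CachingClusteredClient builds after this
// commit: priority is an int, cache flags and bySegment are Booleans.
public class ContextRewrite
{
    static Map<String, Object> rewriteContext(int priority, boolean populateCache)
    {
        Map<String, Object> context = new LinkedHashMap<>();
        context.put("priority", priority);          // int, previously the String "0"
        if (populateCache) {
            context.put("populateCache", false);    // Boolean, previously "false"
            context.put("bySegment", true);         // Boolean, previously "true"
        }
        context.put("intermediate", true);
        return context;
    }

    public static void main(String[] args)
    {
        System.out.println(rewriteContext(0, true));
    }
}
```

Because the `getContext*` accessors on the receiving side handle both representations, this switch to typed values is safe even while older nodes are still in the cluster.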
server/src/main/java/io/druid/client/DirectDruidClient.java

```diff
@@ -106,7 +106,7 @@ public class DirectDruidClient<T> implements QueryRunner<T>
   public Sequence<T> run(Query<T> query)
   {
     QueryToolChest<T, Query<T>> toolChest = warehouse.getToolChest(query);
-    boolean isBySegment = Boolean.parseBoolean(query.getContextValue("bySegment", "false"));
+    boolean isBySegment = query.getContextBySegment(false);
 
     Pair<JavaType, JavaType> types = typesMap.get(query.getClass());
     if (types == null) {
```
server/src/main/java/io/druid/client/RoutingDruidClient.java

```diff
@@ -68,13 +68,12 @@ public class RoutingDruidClient<IntermediateType, FinalType>
   }
 
   public ListenableFuture<FinalType> run(
-      String host,
+      String url,
       Query query,
       HttpResponseHandler<IntermediateType, FinalType> responseHandler
   )
   {
     final ListenableFuture<FinalType> future;
-    final String url = String.format("http://%s/druid/v2/", host);
 
     try {
       log.debug("Querying url[%s]", url);
```
server/src/main/java/io/druid/server/AsyncQueryForwardingServlet.java

```diff
@@ -106,8 +106,6 @@ public class AsyncQueryForwardingServlet extends HttpServlet
     }
     req.setAttribute(DISPATCHED, true);
-    resp.setStatus(200);
-    resp.setContentType("application/x-javascript");
 
     query = objectMapper.readValue(req.getInputStream(), Query.class);
     queryId = query.getId();
@@ -132,6 +130,9 @@ public class AsyncQueryForwardingServlet extends HttpServlet
           @Override
           public ClientResponse<OutputStream> handleResponse(HttpResponse response)
           {
+            resp.setStatus(response.getStatus().getCode());
+            resp.setContentType("application/x-javascript");
+
             byte[] bytes = getContentBytes(response.getContent());
             if (bytes.length > 0) {
               try {
@@ -209,7 +210,7 @@ public class AsyncQueryForwardingServlet extends HttpServlet
             @Override
             public void run()
             {
-              routingDruidClient.run(host, theQuery, responseHandler);
+              routingDruidClient.run(makeUrl(host, req), theQuery, responseHandler);
             }
           }
       );
@@ -235,4 +236,14 @@ public class AsyncQueryForwardingServlet extends HttpServlet
         .emit();
     }
   }
+
+  private String makeUrl(String host, HttpServletRequest req)
+  {
+    String queryString = req.getQueryString();
+
+    if (queryString == null) {
+      return String.format("http://%s%s", host, req.getRequestURI());
+    }
+    return String.format("http://%s%s?%s", host, req.getRequestURI(), req.getQueryString());
+  }
 }
```
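The new `makeUrl` helper forwards the incoming request's URI to the target host and appends the query string only when one is present. A self-contained sketch of that behavior (plain String parameters stand in for `HttpServletRequest`):

```java
// Sketch of AsyncQueryForwardingServlet.makeUrl: rebuild the forwarded URL
// from the target host plus the original request URI, keeping the query
// string only if the client sent one.
public class MakeUrlSketch
{
    static String makeUrl(String host, String requestUri, String queryString)
    {
        if (queryString == null) {
            return String.format("http://%s%s", host, requestUri);
        }
        return String.format("http://%s%s?%s", host, requestUri, queryString);
    }

    public static void main(String[] args)
    {
        System.out.println(makeUrl("broker:8080", "/druid/v2/", null));     // no query string
        System.out.println(makeUrl("broker:8080", "/druid/v2/", "pretty")); // query string kept
    }
}
```

This replaces the previous behavior in RoutingDruidClient, which always hard-coded the path `/druid/v2/` and dropped the query string.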
server/src/main/java/io/druid/server/QueryIDProvider.java

```diff
@@ -44,7 +44,7 @@ public class QueryIDProvider
     return String.format(
         "%s_%s_%s_%s_%s",
         query.getDataSource(),
-        query.getDuration(),
+        query.getIntervals(),
         host,
         new DateTime(),
         id.incrementAndGet()
```
server/src/test/java/io/druid/client/CachingClusteredClientTest.java

Most of this file's hunks (in `testTimeseriesCaching`, `testTimeseriesCachingTimeZone`, `testDisableUseCache`, the TopN/search/time-boundary tests, the EasyMock `.andReturn(...).once()` expectation chains, and the tool-chest warehouse builder around line 1160) only re-indent or re-wrap existing call chains; their content is unchanged. A representative chain:

```java
final Druids.TimeseriesQueryBuilder builder = Druids.newTimeseriesQueryBuilder()
    .dataSource(DATA_SOURCE)
    .intervals(SEG_SPEC)
    .filters(DIM_FILTER)
    .granularity(GRANULARITY)
    .aggregators(AGGS)
    .postAggregators(POST_AGGS)
    .context(CONTEXT);
```

Note that `testDisableUseCache` still passes the cache flags as Strings, which now exercises the backwards-compatibility path:

```java
testQueryCaching(
    1, true,
    builder.context(
        ImmutableMap.<String, Object>of("useCache", "false", "populateCache", "true")
    ).build(),
    new Interval("2011-01-01/2011-01-02"),
    makeTimeResults(new DateTime("2011-01-01"), 50, 5000)
);
```

The substantive change updates the capture assertions to expect typed Boolean context values instead of Strings:

```diff
@@ -762,11 +774,11 @@ public class CachingClusteredClientTest
     for (Capture queryCapture : queryCaptures) {
       Query capturedQuery = (Query) queryCapture.getValue();
       if (expectBySegment) {
-        Assert.assertEquals("true", capturedQuery.getContextValue("bySegment"));
+        Assert.assertEquals(true, capturedQuery.<Boolean>getContextValue("bySegment"));
       } else {
         Assert.assertTrue(
             capturedQuery.getContextValue("bySegment") == null ||
-            capturedQuery.getContextValue("bySegment").equals("false")
+            capturedQuery.getContextValue("bySegment").equals(false)
         );
       }
     }
```