Forever310 / druid
Commit 0d32466c

Authored on Aug 22, 2014 by fjy

Merge pull request #698 from metamx/hll-groupby-test

HLL GroupBy fixes + tests

Parents: 00f7077f, 8b797499

Showing 3 changed files with 67 additions and 27 deletions (+67 −27):

- processing/src/main/java/io/druid/query/aggregation/hyperloglog/HyperLogLogCollector.java (+29 −3)
- processing/src/main/java/io/druid/query/groupby/GroupByQueryQueryToolChest.java (+1 −1)
- server/src/test/java/io/druid/client/CachingClusteredClientTest.java (+37 −23)
processing/src/main/java/io/druid/query/aggregation/hyperloglog/HyperLogLogCollector.java
```diff
@@ -195,6 +195,13 @@ public abstract class HyperLogLogCollector implements Comparable<HyperLogLogCollector>
     return applyCorrection(e, zeroCount);
   }
 
+  /**
+   * Checks if the payload for the given ByteBuffer is sparse or not.
+   * The given buffer must be positioned at getPayloadBytePosition() prior to calling isSparse
+   *
+   * @param buffer
+   * @return
+   */
+  private static boolean isSparse(ByteBuffer buffer)
+  {
+    return buffer.remaining() != NUM_BYTES_FOR_BUCKETS;
```
```diff
@@ -495,13 +502,32 @@ public abstract class HyperLogLogCollector implements Comparable<HyperLogLogCollector>
       return false;
     }
 
-    HyperLogLogCollector collector = (HyperLogLogCollector) o;
+    ByteBuffer otherBuffer = ((HyperLogLogCollector) o).storageBuffer;
 
-    if (storageBuffer != null ? !storageBuffer.equals(collector.storageBuffer) : collector.storageBuffer != null) {
+    if (storageBuffer != null ? false : otherBuffer != null) {
       return false;
     }
 
-    return true;
+    if (storageBuffer == null && otherBuffer == null) {
+      return true;
+    }
+
+    final ByteBuffer denseStorageBuffer;
+    if (storageBuffer.remaining() != getNumBytesForDenseStorage()) {
+      HyperLogLogCollector denseCollector = HyperLogLogCollector.makeCollector(storageBuffer);
+      denseCollector.convertToDenseStorage();
+      denseStorageBuffer = denseCollector.storageBuffer;
+    } else {
+      denseStorageBuffer = storageBuffer;
+    }
+
+    if (otherBuffer.remaining() != getNumBytesForDenseStorage()) {
+      HyperLogLogCollector otherCollector = HyperLogLogCollector.makeCollector(otherBuffer);
+      otherCollector.convertToDenseStorage();
+      otherBuffer = otherCollector.storageBuffer;
+    }
+
+    return denseStorageBuffer.equals(otherBuffer);
   }
 
   @Override
```
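The reworked equals() above normalizes both collectors to dense storage before comparing their byte buffers, so that a sparse and a dense encoding of the same registers compare equal. (The null guard `storageBuffer != null ? false : otherBuffer != null` is just a terse way of writing `storageBuffer == null && otherBuffer != null`.) The normalization idea can be sketched outside Druid with a toy example; `DENSE_SIZE` and `toDense` below are illustrative stand-ins, not part of the Druid API:

```java
import java.nio.ByteBuffer;

// Toy illustration (not the Druid class): two buffers that encode the same
// registers, one "sparse" (shorter) and one "dense" (fixed size), only
// compare equal once both are expanded to the canonical dense form.
public class DenseEqualsSketch {
    static final int DENSE_SIZE = 8; // hypothetical dense payload size

    // Expand a payload to DENSE_SIZE bytes, zero-padding the tail
    // (a stand-in for convertToDenseStorage()).
    static ByteBuffer toDense(ByteBuffer buf) {
        if (buf.remaining() == DENSE_SIZE) {
            return buf;
        }
        byte[] dense = new byte[DENSE_SIZE];
        buf.duplicate().get(dense, 0, buf.remaining());
        return ByteBuffer.wrap(dense);
    }

    public static void main(String[] args) {
        // dense encoding: explicit zeros at the tail
        ByteBuffer dense = ByteBuffer.wrap(new byte[]{1, 2, 0, 0, 0, 0, 0, 0});
        // sparse encoding of the same registers: trailing zeros omitted
        ByteBuffer sparse = ByteBuffer.wrap(new byte[]{1, 2});

        System.out.println(dense.equals(sparse));                   // false: raw buffers differ
        System.out.println(toDense(dense).equals(toDense(sparse))); // true: canonical forms match
    }
}
```

ByteBuffer.equals compares only the remaining bytes element by element, which is why buffers of different lengths can never be equal without this normalization step.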
processing/src/main/java/io/druid/query/groupby/GroupByQueryQueryToolChest.java
```diff
@@ -355,7 +355,7 @@ public class GroupByQueryQueryToolChest extends QueryToolChest<Row, GroupByQuery>
     while (aggsIter.hasNext()) {
       final AggregatorFactory factory = aggsIter.next();
-      Object agg = event.remove(factory.getName());
+      Object agg = event.get(factory.getName());
 
       if (agg != null) {
         event.put(factory.getName(), factory.deserialize(agg));
       }
```
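The one-line change swaps event.remove(...) for event.get(...) before putting the deserialized value back. One observable difference between the two patterns, shown here with a plain LinkedHashMap as a sketch (the commit itself does not state its motivation, and the Druid row type is not reproduced here): remove()+put() re-inserts the key at the end of the iteration order, while get()+put() updates the entry in place.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch: get()+put() updates a LinkedHashMap entry in place, while
// remove()+put() moves the key to the end of the iteration order.
public class MapUpdateOrder {
    // Update k in place, mirroring the fixed code path (get, then put).
    static void updateViaGet(Map<String, Integer> m, String k) {
        Integer v = m.get(k);
        if (v != null) {
            m.put(k, v * 10);
        }
    }

    // Update k by removing first, mirroring the old code path.
    static void updateViaRemove(Map<String, Integer> m, String k) {
        Integer v = m.remove(k);
        if (v != null) {
            m.put(k, v * 10);
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> a = new LinkedHashMap<>();
        a.put("a", 0);
        a.put("rows", 1);
        a.put("imps", 2);
        Map<String, Integer> b = new LinkedHashMap<>(a);

        updateViaGet(a, "rows");
        updateViaRemove(b, "rows");

        System.out.println(a.keySet()); // [a, rows, imps]
        System.out.println(b.keySet()); // [a, imps, rows]
    }
}
```

Both variants leave the same value under "rows"; only the key ordering differs, because putting an existing key into a LinkedHashMap does not change its position, while remove followed by put re-inserts it last.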
server/src/test/java/io/druid/client/CachingClusteredClientTest.java
```diff
@@ -22,6 +22,7 @@ package io.druid.client;
 import com.fasterxml.jackson.annotation.JsonProperty;
 import com.fasterxml.jackson.databind.annotation.JsonSerialize;
 import com.fasterxml.jackson.dataformat.smile.SmileFactory;
+import com.google.common.base.Charsets;
 import com.google.common.base.Function;
 import com.google.common.base.Supplier;
 import com.google.common.base.Suppliers;
```
```diff
@@ -31,6 +32,8 @@ import com.google.common.collect.Iterables;
 import com.google.common.collect.Lists;
 import com.google.common.collect.Maps;
 import com.google.common.collect.Ordering;
+import com.google.common.hash.HashFunction;
+import com.google.common.hash.Hashing;
 import com.metamx.common.ISE;
 import com.metamx.common.guava.FunctionalIterable;
 import com.metamx.common.guava.MergeIterable;
```
...
@@ -65,6 +68,8 @@ import io.druid.query.aggregation.AggregatorFactory;
import
io.druid.query.aggregation.CountAggregatorFactory
;
import
io.druid.query.aggregation.LongSumAggregatorFactory
;
import
io.druid.query.aggregation.PostAggregator
;
import
io.druid.query.aggregation.hyperloglog.HyperLogLogCollector
;
import
io.druid.query.aggregation.hyperloglog.HyperUniquesAggregatorFactory
;
import
io.druid.query.aggregation.post.ArithmeticPostAggregator
;
import
io.druid.query.aggregation.post.ConstantPostAggregator
;
import
io.druid.query.aggregation.post.FieldAccessPostAggregator
;
...
...
```diff
@@ -806,41 +811,50 @@ public class CachingClusteredClientTest
   @Test
   public void testGroupByCaching() throws Exception
   {
+    List<AggregatorFactory> aggsWithUniques = ImmutableList.<AggregatorFactory>builder()
+        .addAll(AGGS)
+        .add(new HyperUniquesAggregatorFactory("uniques", "uniques"))
+        .build();
+
+    final HashFunction hashFn = Hashing.murmur3_128();
+
     GroupByQuery.Builder builder = new GroupByQuery.Builder()
         .setDataSource(DATA_SOURCE)
         .setQuerySegmentSpec(SEG_SPEC)
         .setDimFilter(DIM_FILTER)
         .setGranularity(GRANULARITY)
         .setDimensions(Arrays.<DimensionSpec>asList(new DefaultDimensionSpec("a", "a")))
-        .setAggregatorSpecs(AGGS)
+        .setAggregatorSpecs(aggsWithUniques)
         .setPostAggregatorSpecs(POST_AGGS)
         .setContext(CONTEXT);
 
+    final HyperLogLogCollector collector = HyperLogLogCollector.makeLatestCollector();
+    collector.add(hashFn.hashString("abc123", Charsets.UTF_8).asBytes());
+    collector.add(hashFn.hashString("123abc", Charsets.UTF_8).asBytes());
+
     testQueryCaching(
         client,
         builder.build(),
         new Interval("2011-01-01/2011-01-02"),
-        makeGroupByResults(new DateTime("2011-01-01"), ImmutableMap.of("a", "a", "rows", 1, "imps", 1, "impers", 1)),
+        makeGroupByResults(
+            new DateTime("2011-01-01"),
+            ImmutableMap.of("a", "a", "rows", 1, "imps", 1, "impers", 1, "uniques", collector)
+        ),
 
         new Interval("2011-01-02/2011-01-03"),
-        makeGroupByResults(new DateTime("2011-01-02"), ImmutableMap.of("a", "b", "rows", 2, "imps", 2, "impers", 2)),
+        makeGroupByResults(
+            new DateTime("2011-01-02"),
+            ImmutableMap.of("a", "b", "rows", 2, "imps", 2, "impers", 2, "uniques", collector)
+        ),
 
         new Interval("2011-01-05/2011-01-10"),
         makeGroupByResults(
-            new DateTime("2011-01-05"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3),
-            new DateTime("2011-01-06"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4),
-            new DateTime("2011-01-07"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5),
-            new DateTime("2011-01-08"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6),
-            new DateTime("2011-01-09"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7)
+            new DateTime("2011-01-05"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3, "uniques", collector),
+            new DateTime("2011-01-06"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4, "uniques", collector),
+            new DateTime("2011-01-07"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5, "uniques", collector),
+            new DateTime("2011-01-08"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6, "uniques", collector),
+            new DateTime("2011-01-09"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7, "uniques", collector)
         ),
 
         new Interval("2011-01-05/2011-01-10"),
         makeGroupByResults(
-            new DateTime("2011-01-05T01"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3),
-            new DateTime("2011-01-06T01"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4),
-            new DateTime("2011-01-07T01"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5),
-            new DateTime("2011-01-08T01"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6),
-            new DateTime("2011-01-09T01"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7)
+            new DateTime("2011-01-05T01"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3, "uniques", collector),
+            new DateTime("2011-01-06T01"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4, "uniques", collector),
+            new DateTime("2011-01-07T01"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5, "uniques", collector),
+            new DateTime("2011-01-08T01"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6, "uniques", collector),
+            new DateTime("2011-01-09T01"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7, "uniques", collector)
         )
     );
```
```diff
@@ -874,16 +888,16 @@ public class CachingClusteredClientTest
     );
     TestHelper.assertExpectedObjects(
         makeGroupByResults(
-            new DateTime("2011-01-05T"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3),
-            new DateTime("2011-01-05T01"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3),
-            new DateTime("2011-01-06T"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4),
-            new DateTime("2011-01-06T01"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4),
-            new DateTime("2011-01-07T"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5),
-            new DateTime("2011-01-07T01"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5),
-            new DateTime("2011-01-08T"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6),
-            new DateTime("2011-01-08T01"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6),
-            new DateTime("2011-01-09T"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7),
-            new DateTime("2011-01-09T01"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7)
+            new DateTime("2011-01-05T"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3, "uniques", collector),
+            new DateTime("2011-01-05T01"), ImmutableMap.of("a", "c", "rows", 3, "imps", 3, "impers", 3, "uniques", collector),
+            new DateTime("2011-01-06T"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4, "uniques", collector),
+            new DateTime("2011-01-06T01"), ImmutableMap.of("a", "d", "rows", 4, "imps", 4, "impers", 4, "uniques", collector),
+            new DateTime("2011-01-07T"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5, "uniques", collector),
+            new DateTime("2011-01-07T01"), ImmutableMap.of("a", "e", "rows", 5, "imps", 5, "impers", 5, "uniques", collector),
+            new DateTime("2011-01-08T"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6, "uniques", collector),
+            new DateTime("2011-01-08T01"), ImmutableMap.of("a", "f", "rows", 6, "imps", 6, "impers", 6, "uniques", collector),
+            new DateTime("2011-01-09T"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7, "uniques", collector),
+            new DateTime("2011-01-09T01"), ImmutableMap.of("a", "g", "rows", 7, "imps", 7, "impers", 7, "uniques", collector)
         ),
         runner.run(
             builder.setInterval("2011-01-05/2011-01-10")
```