- 18 September 2018, 2 commits
-
-
Committed by Slim Bouguerra

* Adding licenses and enable apache-rat-plugin. Change-Id: I4685a2d9f1e147855dba69329b286f2d5bee3c18
* Restore the copyright of demo_table and add it to the list of allowed ones. Change-Id: I2a9efde6f4b984bc1ac90483e90d98e71f818a14
* Review comments. Change-Id: I0256c930b7f9a5bb09b44b5e7a149e6ec48cb0ca
* More fixup. Change-Id: I1355e8a2549e76cd44487abec142be79bec59de2
* Align. Change-Id: I70bc47ecb577bdf6b91639dd91b6f5642aa6b02f
-
Committed by Hongze Zhang
-
- 15 September 2018, 1 commit
-
-
Committed by Roman Leventov

* Prohibit Random usage patterns
* Fix FlattenJSONBenchmarkUtil
-
- 14 September 2018, 3 commits
-
-
Committed by QiuMM

* support specifying a list of task ports
* fix typos
* address comments
* remove druid.indexer.runner.separateIngestionEndpoint config
* tweak doc
* fix doc
* code cleanup
* keep some useful comments
-
Committed by Roman Leventov

* Prohibit LinkedList
* Fix tests
* Fix
* Remove unused import
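A minimal Java sketch of the usual replacements when a `LinkedList` ban like this is enforced (my own illustration, not code from the patch): `ArrayList` for list-style access, `ArrayDeque` for queue/deque-style access.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class LinkedListReplacements
{
  public static void main(String[] args)
  {
    // List-style usage: ArrayList has better cache locality and O(1) random access.
    List<String> list = new ArrayList<>();
    list.add("a");
    list.add("b");

    // Queue/deque-style usage: ArrayDeque is the standard non-linked alternative.
    Deque<String> deque = new ArrayDeque<>();
    deque.addLast("first-in");
    deque.addFirst("pushed-front");

    System.out.println(list.get(1));        // b
    System.out.println(deque.peekFirst());  // pushed-front
  }
}
```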
-
Committed by Clint Wylie

* 'suspend' and 'resume' support for kafka indexing service. Changes:
  * introduces the `SuspendableSupervisorSpec` interface to describe supervisors that support suspend/resume functionality controlled through the `SupervisorManager`, which will gracefully shut down the supervisor and its tasks, update its `SupervisorSpec` with either a suspended or running state, and update with the toggled spec. Spec updates are provided by `SuspendableSupervisorSpec.createSuspendedSpec` and `SuspendableSupervisorSpec.createRunningSpec` respectively.
  * `KafkaSupervisorSpec` extends `SuspendableSupervisorSpec` and now supports suspend/resume functionality. The difference in behavior between the 'running' and 'suspended' states is whether the supervisor attempts to ensure that indexing tasks are or are not running, respectively. Behavior is otherwise identical.
  * `SupervisorResource` now provides `/druid/indexer/v1/supervisor/{id}/suspend` and `/druid/indexer/v1/supervisor/{id}/resume`, which are used to suspend/resume suspendable supervisors.
  * Deprecated `/druid/indexer/v1/supervisor/{id}/shutdown` and moved its functionality to `/druid/indexer/v1/supervisor/{id}/terminate`, since 'shutdown' is ambiguous verbiage for something that effectively stops a supervisor forever.
  * Added the ability to get all supervisor specs from `/druid/indexer/v1/supervisor` by supplying the 'full' query parameter (`/druid/indexer/v1/supervisor?full`), which returns a list of JSON objects of the form `{"id":<id>, "spec":<SupervisorSpec>}`.
  * Updated the overlord console UI to enable suspend/resume, and changed 'shutdown' to 'terminate'.
* move overlord console status to its own column in the supervisor table so it does not look like garbage
* spacing
* padding
* other kind of spacing
* fix rebase fail
* fix more better
* all supervisors now suspendable, updated materialized view supervisor to support suspend, more tests
* fix log
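In summary, the supervisor endpoints after this change (paths taken verbatim from the description above; the HTTP verbs are my assumption, following the usual convention of POST for state changes and GET for reads):

```
POST /druid/indexer/v1/supervisor/{id}/suspend    -- suspend a suspendable supervisor
POST /druid/indexer/v1/supervisor/{id}/resume     -- resume a suspended supervisor
POST /druid/indexer/v1/supervisor/{id}/terminate  -- stop the supervisor permanently
POST /druid/indexer/v1/supervisor/{id}/shutdown   -- deprecated; moved to terminate
GET  /druid/indexer/v1/supervisor?full            -- list {"id": <id>, "spec": <SupervisorSpec>} objects
```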
-
- 11 September 2018, 1 commit
-
-
Committed by Gian Merlino

* Broker backpressure. Adds a new property "druid.broker.http.maxQueuedBytes" and a new context parameter "maxQueuedBytes". Both represent a maximum number of bytes queued per query before exerting backpressure on the channel to the data server. Fixes #4933.
* Fix query context doc.
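As a sketch of how the new context parameter might be supplied in a query (the surrounding query fields and the byte value are arbitrary examples of mine, not part of the commit):

```json
{
  "queryType": "timeseries",
  "dataSource": "wikipedia",
  "intervals": ["2018-01-01/2018-01-02"],
  "granularity": "all",
  "aggregations": [{"type": "count", "name": "rows"}],
  "context": {
    "maxQueuedBytes": 25000000
  }
}
```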
-
- 08 September 2018, 1 commit
-
-
Committed by Clint Wylie

Add support for 'maxTotalRows' to incremental publishing kafka indexing task and appenderator based realtime task (#6129)
* resolves #5898 by adding maxTotalRows to the incremental publishing kafka index task and the appenderator based realtime indexing task, as available in IndexTask
* address review comments
* changes due to review
* merge fail
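A hypothetical sketch of where `maxTotalRows` would sit in a Kafka indexing task's tuningConfig; the neighboring field names (`maxRowsInMemory`, `maxRowsPerSegment`) and all values are my assumptions for illustration, not taken from the commit:

```json
{
  "type": "kafka",
  "maxRowsInMemory": 100000,
  "maxRowsPerSegment": 5000000,
  "maxTotalRows": 20000000
}
```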
-
- 31 August 2018, 1 commit
-
-
Committed by Gian Merlino

* Rename io.druid to org.apache.druid.
* Fix META-INF files and remove some benchmark results.
* MonitorsConfig update for metrics package migration.
* Reorder some dimensions in inner queries for some reason.
* Fix protobuf tests.
-
- 27 August 2018, 1 commit
-
-
Committed by Gian Merlino

* Fix all inspection errors currently reported. TeamCity builds on master are reporting inspection errors, possibly because it was not running for a while due to the Apache migration, and there was some drift.
* Fix one more location.
* Fix tests.
* Another fix.
-
- 22 August 2018, 1 commit
-
-
Committed by Benedict Jin

* Make time-related variables more readable
* Patch some improvements from the code reviewer
* Remove unnecessary boxing of Long type variables
-
- 18 August 2018, 2 commits
-
-
Committed by Jihoon Son

* Fix NPE for taskGroupId
* missing changes
* fix wrong annotation
* fix potential race
* keep baseSequenceName
* deprecate the old param
-
Committed by Samarth Jain
Composite request logger doesn't invoke @LifeCycleStart and @LifeCycleStop methods on its dependencies (#6173)
-
- 16 August 2018, 1 commit
-
-
Committed by Gian Merlino

* Fix three bugs with segment publishing.
  1. In AppenderatorImpl: always use a unique path if requested, even if the segment was already pushed. This is important because if we don't do this, it causes the issue mentioned in #6124.
  2. In IndexerSQLMetadataStorageCoordinator: fix a bug that could cause it to return a "not published" result instead of throwing an exception, when there was one metadata update failure, followed by some random exception. This is done by resetting the AtomicBoolean that tracks what case we're in, each time the callback runs.
  3. In BaseAppenderatorDriver: only kill segments if we get an affirmative false publish result. Skip killing if we just got some exception. The reason for this is that we want to avoid killing segments if they are in an unknown state.
  Two other changes to clarify the contracts a bit and hopefully prevent future bugs:
  1. Return SegmentPublishResult from TransactionalSegmentPublisher, to make it more similar to announceHistoricalSegments.
  2. Make it explicit, at multiple levels of javadocs, that a "false" publish result must indicate that the publish _definitely_ did not happen. Unknown states must be exceptions. This helps BaseAppenderatorDriver do the right thing.
* Remove javadoc-only import.
* Updates.
* Fix test.
* Fix tests.
-
- 11 August 2018, 1 commit
-
-
Committed by Jihoon Son

* Further optimize memory for Travis jobs
* fix build
* sudo false
-
- 10 August 2018, 2 commits
-
-
Committed by Christoph Hösler

* fix: stop druid on unhandled curator exceptions
* catch exceptions when stopping lifecycle
-
Committed by Jihoon Son

* Add keepSegmentGranularity for compactionTask
* fix build
* createIoConfig method
* fix build
* fix build
* address comments
* fix build
-
- 08 August 2018, 1 commit
-
-
Committed by Gian Merlino

* Cache: Add maxEntrySize config. The idea is this makes it more feasible to cache query types that can potentially generate large result sets, like groupBy and select, without fear of writing too much to the cache per query. Includes a refactor of cache population code in CachingQueryRunner and CachingClusteredClient, such that they now use the same CachePopulator interface with two implementations: one for foreground and one for background. The main reason for splitting the foreground/background impls is that the foreground impl can have a more effective implementation of maxEntrySize: it can stop retaining subvalues for the cache early.
* Add CachePopulatorStats.
* Fix whitespace.
* Fix docs.
* Fix various tests.
* Add tests.
* Fix tests.
* Better tests
* Remove conflict markers.
* Fix licenses.
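A sketch of how the new limit might be set alongside other cache properties in `runtime.properties`; the property key `druid.cache.maxEntrySize` is my assumption based on Druid's usual `druid.cache.*` naming, and the neighboring keys and values are arbitrary illustrations:

```properties
druid.cache.type=caffeine
druid.cache.sizeInBytes=1000000000
# Skip caching any single entry larger than ~1 MB (assumed property name)
druid.cache.maxEntrySize=1000000
```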
-
- 07 August 2018, 1 commit
-
-
Committed by Jihoon Son

* Native parallel indexing without shuffle
* fix build
* fix ci
* fix ingestion without intervals
* fix retry
* fix retry
* add it test
* use chat handler
* fix build
* add docs
* fix ITUnionQueryTest
* fix failures
* disable metrics reporting
* working
* Fix split of static-s3 firehose
* Add endpoints to supervisor task and a unit test for endpoints
* increase timeout in test
* Added doc
* Address comments
* Fix overlapping locks
* address comments
* Fix static s3 firehose
* Fix test
* fix build
* fix test
* fix typo in docs
* add missing maxBytesInMemory to doc
* address comments
* fix race in test
* fix test
* Rename to ParallelIndexSupervisorTask
* fix teamcity
* address comments
* Fix license
* addressing comments
* addressing comments
* indexTaskClient-based segmentAllocator instead of CountingActionBasedSegmentAllocator
* Fix race in TaskMonitor and move HTTP endpoints to supervisorTask from runner
* Add more javadocs
* use StringUtils.nonStrictFormat for logging
* fix typo and remove unused class
* fix tests
* change package
* fix strict build
* tmp
* Fix overlord api according to the recent change in master
* Fix it test
-
- 02 August 2018, 2 commits
-
-
Committed by Nishant Bangarwa

* Part 2 of changes for SQL Compatible Null Handling
* Review comments - break lines longer than 120 characters
* review comments
* review comments
* fix license
* fix test failure
* fix CalciteQueryTest failure
* Null Handling - Review comments
* review comments
* review comments
* fix checkstyle
* fix checkstyle
* remove unrelated change
* fix test failure
* fix failing test
* fix travis failures
* Make StringLast and StringFirst aggregators nullable and fix travis failures
-
Committed by Jonathan Wei

* Optimize per-segment queries
* Always optimize, add unit test
* PR comments
* Only run IntervalDimFilter optimization on __time column
* PR comments
* Checkstyle fix
* Add test for non __time column
-
- 01 August 2018, 2 commits
-
-
Committed by Clint Wylie
-
Committed by Roman Leventov

* Prohibit Lists.newArrayList() with a single argument
* Test fixes
* Add Javadoc to Node constructor
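For reference, the typical JDK-only rewrites when single-argument `Lists.newArrayList(x)` is banned (a sketch of mine, not code from the patch): `Collections.singletonList` when the list stays read-only, or an `ArrayList` seeded from it when the list will be mutated.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;

public class SingleElementLists
{
  public static void main(String[] args)
  {
    // Read-only single-element list: no Guava needed.
    List<String> readOnly = Collections.singletonList("druid");

    // Mutable list seeded with one element.
    List<String> mutable = new ArrayList<>(Collections.singletonList("druid"));
    mutable.add("broker");

    System.out.println(readOnly.size() + " " + mutable.size()); // 1 2
  }
}
```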
-
- 31 July 2018, 1 commit
-
-
Committed by Gian Merlino

* Remove some unnecessary task storage internal APIs.
  - Remove MetadataStorageActionHandler's getInactiveStatusesSince and getActiveEntriesWithStatus.
  - Remove TaskStorage's getCreatedDateTimeAndDataSource.
  - Remove TaskStorageQueryAdapter's getCreatedTime and getCreatedDateAndDataSource.
  - Migrated all callers to getActiveTaskInfo and getCompletedTaskInfo.
  This has one side effect: since getActiveTaskInfo (new) warns and continues when it sees unreadable tasks, but getActiveEntriesWithStatus threw an exception when it encountered those, after this patch bad tasks will be ignored when syncing from metadata storage rather than causing an exception to be thrown. IMO, this is an improvement, since the most likely reason for bad tasks is either:
  - A new version introduced an additional validation, and a pre-existing task doesn't pass it.
  - You are rolling back from a newer version to an older version.
  In both cases, I believe you would want to skip tasks that can't be deserialized, rather than blocking overlord startup.
* Remove unused import.
* Fix formatting.
* Fix formatting.
-
- 28 July 2018, 2 commits
-
-
Committed by Benedict Jin

* Various changes about druid-services module
* Patch improvements from reviewer
* Add ToArrayCallWithZeroLengthArrayArgument & ArraysAsListWithZeroOrOneArgument into inspection profile
* Fix ArraysAsListWithZeroOrOneArgument
* Fix conflict
* Fix ToArrayCallWithZeroLengthArrayArgument
* Fix AliEqualsAvoidNull
* Remove blank line
* Remove unused import clauses
* Fix code style in TopNQueryRunnerTest
* Fix conflict
* Don't use Collections.singletonList when converting the type of array type
* Add argLine into maven-surefire-plugin in druid-process module & increase the timeout value for testMoveSegment testcase
* Roll back the latest commit
* Add java.io.File#toURL() into druid-forbidden-apis
* Using Boolean.parseBoolean instead of Boolean.valueOf for CliCoordinator#isOverlord
* Add a new regexp element into stylecode xml file
* Fix style error for new regexp
* Set the level of ArraysAsListWithZeroOrOneArgument as WARNING
* Fix style error for new regexp
* Add option BY_LEVEL for ToArrayCallWithZeroLengthArrayArgument in inspection profile
* Roll back the level as ToArrayCallWithZeroLengthArrayArgument as ERROR
* Add toArray(new Object[0]) regexp into checkstyle config file & fix them
* Set the level of ArraysAsListWithZeroOrOneArgument as ERROR & Roll back the level of ToArrayCallWithZeroLengthArrayArgument as WARNING until Youtrack fix it
* Add a comment for string equals regexp in checkstyle config
* Fix code format
* Add RedundantTypeArguments as ERROR level inspection
* Fix cannot resolve symbol datasource
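Several of the inspections above concern `toArray` call style; as context, a sketch of the two call forms in question (my own illustration, not code from the commit, which does not say which form the project ultimately prefers):

```java
import java.util.Arrays;
import java.util.List;

public class ToArrayVariants
{
  public static void main(String[] args)
  {
    List<String> names = Arrays.asList("broker", "overlord", "historical");

    // Zero-length-array form: the pattern the toArray(new Object[0]) regexp matches.
    String[] zeroLength = names.toArray(new String[0]);

    // Pre-sized form: passes an array already sized to the list.
    String[] preSized = names.toArray(new String[names.size()]);

    System.out.println(zeroLength.length + " " + preSized.length); // 3 3
  }
}
```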
-
Committed by Jihoon Son
-
- 27 July 2018, 1 commit
-
-
Committed by kaijianding
-
- 25 July 2018, 2 commits
-
-
Committed by Jihoon Son
Similar issue to https://github.com/apache/incubator-druid/issues/6028.
-
Committed by Roman Leventov
Synchronize scheduled poll() calls in SQLMetadataRuleManager to prevent flakiness in SqlMetadataRuleManagerTest (#6033)
-
- 20 July 2018, 2 commits
-
-
Committed by Surekha

* Add support to filter on datasource for active tasks
* Added datasource filter to sql query for active tasks
* Fixed unit tests
* Address PR comments
-
Committed by Jihoon Son
-
- 14 July 2018, 1 commit
-
-
Committed by Jihoon Son

* Fix NPE while handling CheckpointNotice
* fix code style
* Fix test
* fix test
* add a log for creating a new taskGroup
* fix backward compatibility in KafkaIOConfig
-
- 12 July 2018, 3 commits
-
-
Committed by Clint Wylie

* this will fix it
* filter destinations to not consider servers already serving segment
* fix it
* cleanup
* fix opposite day in ImmutableDruidServer.equals
* simplify
-
Committed by Clint Wylie

* fix explosion in curator load queue peon caused by additional logging, as well as annoying chatty log
* remove log message
-
Committed by Gian Merlino

* Update license headers. For compliance with http://www.apache.org/legal/src-headers.html.
* More license adjustments.
* Fix mistakenly edited package line.
-
- 11 July 2018, 1 commit
-
-
Committed by Gian Merlino

False failures on Travis due to spurious timeouts (in turn due to noisy neighbors) are a bigger problem than legitimate failures taking too long to time out. So it makes sense to extend timeouts.
-
- 06 July 2018, 1 commit
-
-
Committed by Gian Merlino
They are bad because datasources are used as paths on filesystems, and slashes invariably make things get stored improperly.
-
- 05 July 2018, 3 commits
-
-
Committed by Clint Wylie
change default compaction task target size from 800MB to 400MB to fall within range of what docs recommend for segment sizing (#5930)
-
Committed by Jihoon Son
-
Committed by Clint Wylie

* more coordinator logging to help give context to load queue peon log messages
* fix style
* more chill load queue peon log messages
-