1. 09 Mar 2016, 20 commits
    • D
      [SPARK-13702][CORE][SQL][MLLIB] Use diamond operator for generic instance creation in Java code. · c3689bc2
      Committed by Dongjoon Hyun
      ## What changes were proposed in this pull request?
      
      In order to make `docs/examples` (and other related code) simpler, more readable, and more user-friendly, this PR replaces existing code like the following with the `diamond` operator.
      
      ```
      -    final ArrayList<Product2<Object, Object>> dataToWrite =
      -      new ArrayList<Product2<Object, Object>>();
      +    final ArrayList<Product2<Object, Object>> dataToWrite = new ArrayList<>();
      ```
      
      Java 7 and higher support the **diamond** operator, which replaces the type arguments required to invoke the constructor of a generic class with an empty set of type parameters (`<>`). Currently, Spark's Java code mixes both styles.
      
      ## How was this patch tested?
      
      Manual.
      Pass the existing tests.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11541 from dongjoon-hyun/SPARK-13702.
      c3689bc2
    • A
      [SPARK-13631][CORE] Thread-safe getLocationsWithLargestOutputs · cbff2803
      Committed by Andy Sloane
      ## What changes were proposed in this pull request?
      
      If a job is being scheduled in one thread which has a dependency on an
      RDD currently executing a shuffle in another thread, Spark would throw a
      NullPointerException. This patch synchronizes access to `mapStatuses` and
      skips null status entries (which are in-progress shuffle tasks).
      
      ## How was this patch tested?
      
      Our client code unit test suite, which was reliably reproducing the race
      condition with 10 threads, shows that this fixes it. I have not found a minimal
      test case to add to Spark, but I will attempt to do so if desired.
      
      The same test case was tripping up on SPARK-4454, which was fixed by
      making other DAGScheduler code thread-safe.
      
      shivaram srowen
      
      Author: Andy Sloane <asloane@tetrationanalytics.com>
      
      Closes #11505 from a1k0n/SPARK-13631.
      cbff2803
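The fix pattern above (synchronize access to the shared `mapStatuses` map and skip null entries for in-progress shuffle tasks) can be sketched in Python. Names and structure here are a hypothetical toy analogue, not Spark's actual `DAGScheduler` code:

```python
import threading

class MapStatusTracker:
    """Toy sketch: a shared dict of shuffle map statuses, where None marks an
    in-progress task. Illustrates the fix pattern, not Spark's real classes."""

    def __init__(self):
        self._lock = threading.Lock()
        self._map_statuses = {}  # partition id -> status, or None (in progress)

    def register(self, partition, status):
        with self._lock:
            self._map_statuses[partition] = status

    def locations_with_outputs(self):
        # Hold the lock while reading, and skip None entries (in-progress
        # tasks) instead of dereferencing them and raising an error.
        with self._lock:
            return [s for s in self._map_statuses.values() if s is not None]
```

Without the lock and the `is not None` filter, a concurrent reader could observe a partially registered entry and fail, which is the race the patch closes.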
    • T
      [SPARK-13640][SQL] Synchronize ScalaReflection.mirror method. · 2c5af7d4
      Committed by Takuya UESHIN
      ## What changes were proposed in this pull request?
      
      The `ScalaReflection.mirror` method should be synchronized when the Scala version is `2.10`, because `universe.runtimeMirror` is not thread safe.
      
      ## How was this patch tested?
      
      I added a test to check the thread safety of the `ScalaReflection.mirror` method in `ScalaReflectionSuite`, which will throw the following exception in Scala `2.10` without this patch:
      
      ```
      [info] - thread safety of mirror *** FAILED *** (49 milliseconds)
      [info]   java.lang.UnsupportedOperationException: tail of empty list
      [info]   at scala.collection.immutable.Nil$.tail(List.scala:339)
      [info]   at scala.collection.immutable.Nil$.tail(List.scala:334)
      [info]   at scala.reflect.internal.SymbolTable.popPhase(SymbolTable.scala:172)
      [info]   at scala.reflect.internal.Symbols$Symbol.unsafeTypeParams(Symbols.scala:1477)
      [info]   at scala.reflect.internal.Symbols$TypeSymbol.tpe(Symbols.scala:2777)
      [info]   at scala.reflect.internal.Mirrors$RootsBase.init(Mirrors.scala:235)
      [info]   at scala.reflect.runtime.JavaMirrors$class.createMirror(JavaMirrors.scala:34)
      [info]   at scala.reflect.runtime.JavaMirrors$class.runtimeMirror(JavaMirrors.scala:61)
      [info]   at scala.reflect.runtime.JavaUniverse.runtimeMirror(JavaUniverse.scala:12)
      [info]   at scala.reflect.runtime.JavaUniverse.runtimeMirror(JavaUniverse.scala:12)
      [info]   at org.apache.spark.sql.catalyst.ScalaReflection$.mirror(ScalaReflection.scala:36)
      [info]   at org.apache.spark.sql.catalyst.ScalaReflectionSuite$$anonfun$12$$anonfun$apply$mcV$sp$1$$anonfun$apply$1$$anonfun$apply$2.apply(ScalaReflectionSuite.scala:256)
      [info]   at org.apache.spark.sql.catalyst.ScalaReflectionSuite$$anonfun$12$$anonfun$apply$mcV$sp$1$$anonfun$apply$1$$anonfun$apply$2.apply(ScalaReflectionSuite.scala:252)
      [info]   at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
      [info]   at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
      [info]   at scala.concurrent.impl.ExecutionContextImpl$$anon$3.exec(ExecutionContextImpl.scala:107)
      [info]   at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
      [info]   at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
      [info]   at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
      [info]   at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
      ```
      
      Note that the test passes when the Scala version is `2.11`.
      
      Author: Takuya UESHIN <ueshin@happy-camper.st>
      
      Closes #11487 from ueshin/issues/SPARK-13640.
      2c5af7d4
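The shape of the fix, serializing all calls to a factory that is not thread safe behind a single lock, can be sketched in Python (hypothetical names; the real change wraps `ScalaReflection.mirror` in a synchronized block in Scala):

```python
import threading

_mirror_lock = threading.Lock()

def _runtime_mirror(classloader):
    # Stand-in for universe.runtimeMirror, which is not thread safe on
    # Scala 2.10; imagine this corrupting shared state under concurrency.
    return ("mirror", classloader)

def mirror(classloader="default"):
    """Serialize every call to the unsafe factory behind one lock, so
    concurrent callers can never run it at the same time."""
    with _mirror_lock:
        return _runtime_mirror(classloader)
```

Eight threads calling `mirror()` concurrently now all go through the lock one at a time, which is exactly what the `synchronized` keyword gives the Scala method.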
    • D
      [SPARK-13692][CORE][SQL] Fix trivial Coverity/Checkstyle defects · f3201aee
      Committed by Dongjoon Hyun
      ## What changes were proposed in this pull request?
      
      This issue fixes the following potential bugs and Java coding style issues detected by Coverity and Checkstyle.
      
      - Implement both null and type checking in equals functions.
      - Fix wrong type casting logic in SimpleJavaBean2.equals.
      - Add `implements Cloneable` to `UTF8String` and `SortedIterator`.
      - Remove dereferencing before null check in `AbstractBytesToBytesMapSuite`.
      - Fix coding style: add `{}` to single-statement `for` loops in the MLlib examples.
      - Remove unused imports in `ColumnarBatch` and `JavaKinesisStreamSuite`.
      - Remove unused fields in `ChunkFetchIntegrationSuite`.
      - Add `stop()` to prevent resource leak.
      
      Please note that the last two checkstyle errors exist on newly added commits after [SPARK-13583](https://issues.apache.org/jira/browse/SPARK-13583).
      
      ## How was this patch tested?
      
      Manual, via `./dev/lint-java` and the Coverity site.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11530 from dongjoon-hyun/SPARK-13692.
      f3201aee
    • J
      [SPARK-7286][SQL] Deprecate !== in favour of =!= · 035d3acd
      Committed by Jakob Odersky
      This PR replaces #9925 which had issues with CI. **Please see the original PR for any previous discussions.**
      
      ## What changes were proposed in this pull request?
      Deprecate the SparkSQL column operator !== and use =!= as an alternative.
      Fixes subtle issues related to operator precedence (basically, `!==` does not have the same precedence as its logical negation, `===`).
      
      ## How was this patch tested?
      All currently existing tests.
      
      Author: Jakob Odersky <jodersky@gmail.com>
      
      Closes #11588 from jodersky/SPARK-7286.
      035d3acd
    • H
      [SPARK-13754] Keep old data source name for backwards compatibility · cc4ab37e
      Committed by Hossein
      ## Motivation
      The CSV data source was contributed by Databricks. It is the inlined version of https://github.com/databricks/spark-csv. The data source name was `com.databricks.spark.csv`. As a result, there are many tables created on older versions of Spark with that name as the source. For backwards compatibility we should keep the old name.
      
      ## Proposed changes
      `com.databricks.spark.csv` was added to the `backwardCompatibilityMap` in `ResolvedDataSource.scala`.
      
      ## Tests
      A unit test was added to `CSVSuite` to parse a CSV file using the old name.
      
      Author: Hossein <hossein@databricks.com>
      
      Closes #11589 from falaki/SPARK-13754.
      cc4ab37e
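The backwards-compatibility lookup is a simple name-aliasing map. A minimal Python sketch (the real map lives in `ResolvedDataSource.scala` and maps legacy names to the full class names of the built-in providers; the short names below are a simplification):

```python
# Hypothetical, simplified alias table: legacy data source names map to the
# built-in providers that replaced them.
BACKWARD_COMPATIBILITY_MAP = {
    "com.databricks.spark.csv": "csv",
    # other legacy names would go here
}

def resolve_provider(name):
    """Return the canonical provider name, honoring legacy names so that
    tables created with the old name keep working."""
    return BACKWARD_COMPATIBILITY_MAP.get(name, name)
```

Unknown names pass through unchanged, so only the legacy entries are rewritten.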
    • D
      [SPARK-13750][SQL] fix sizeInBytes of HadoopFsRelation · 982ef2b8
      Committed by Davies Liu
      ## What changes were proposed in this pull request?
      
      This PR fixes the `sizeInBytes` of `HadoopFsRelation`.
      
      ## How was this patch tested?
      
      Added a regression test for it.
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #11590 from davies/fix_sizeInBytes.
      982ef2b8
    • B
      [SPARK-13625][PYSPARK][ML] Added a check to see if an attribute is a property... · d8813fa0
      Committed by Bryan Cutler
      [SPARK-13625][PYSPARK][ML] Added a check to see if an attribute is a property when getting param list
      
      ## What changes were proposed in this pull request?
      
      Added a check in pyspark.ml.param.Param.params() to see if an attribute is a property (decorated with `property`) before checking if it is a `Param` instance.  This prevents the property from being invoked to 'get' this attribute, which could possibly cause an error.
      
      ## How was this patch tested?
      
      Added a test case with a class that has a property which raises an error when invoked, then called `Param.params` to verify that the property is not invoked while other params in the class are still found. Also ran the pyspark-ml tests before the fix to trigger the error, and again after the fix to verify that the error was resolved and the method worked properly.
      
      Author: Bryan Cutler <cutlerb@gmail.com>
      
      Closes #11476 from BryanCutler/pyspark-ml-property-attr-SPARK-13625.
      d8813fa0
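The key Python trick here is that looking up an attribute on the *class* returns the `property` descriptor itself without invoking its getter; only lookup on the *instance* triggers it. A minimal sketch of the check, with a toy `Param` stand-in (class names here are hypothetical, not pyspark's):

```python
class Param:
    """Stand-in for pyspark.ml.param.Param."""
    def __init__(self, name):
        self.name = name

class HasBadProperty:
    alpha = Param("alpha")

    @property
    def broken(self):
        raise RuntimeError("invoking this property fails")

def params(obj):
    """Collect Param attributes, skipping properties so their getters are
    never invoked (the check this patch adds)."""
    result = []
    for name in dir(obj):
        # getattr on the class returns the property object, not its value.
        if isinstance(getattr(type(obj), name, None), property):
            continue
        if isinstance(getattr(obj, name), Param):
            result.append(name)
    return sorted(result)
```

Without the `property` check, `getattr(obj, "broken")` would invoke the getter and raise.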
    • J
      [SPARK-13755] Escape quotes in SQL plan visualization node labels · 81f54acc
      Committed by Josh Rosen
      When generating Graphviz DOT files in the SQL query visualization we need to escape double-quotes inside node labels. This is a followup to #11309, which fixed a similar graph in Spark Core's DAG visualization.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11587 from JoshRosen/graphviz-escaping.
      81f54acc
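Escaping a node label for a quoted Graphviz DOT string boils down to escaping backslashes first, then double quotes. A minimal Python sketch of the idea (function name is hypothetical; the actual fix is in Spark's Scala visualization code):

```python
def escape_dot_label(label):
    """Make a label safe inside a quoted DOT string: escape backslashes
    first so we don't double-escape the quotes we add next."""
    return label.replace("\\", "\\\\").replace('"', '\\"')
```

For example, a plan node label containing quotes no longer breaks the generated DOT file.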
    • S
      [SPARK-13668][SQL] Reorder filter/join predicates to short-circuit isNotNull checks · e430614e
      Committed by Sameer Agarwal
      ## What changes were proposed in this pull request?
      
      If a filter predicate or a join condition consists of `IsNotNull` checks, we should reorder these checks such that these non-nullability checks are evaluated before the rest of the predicates.
      
      For example, if a filter predicate is of the form `a > 5 && isNotNull(b)`, we should rewrite it as `isNotNull(b) && a > 5` during physical plan generation.
      
      ## How was this patch tested?
      
      new unit tests that verify the physical plan for both filters and joins in `ReorderedPredicateSuite`
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #11511 from sameeragarwal/reorder-isnotnull.
      e430614e
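The rewrite is a stable partition of the conjuncts: null checks move to the front, everything else keeps its relative order. A toy Python sketch with predicates modeled as strings (Spark operates on expression trees, not strings):

```python
def reorder_predicates(conjuncts):
    """Stable reorder: evaluate is-not-null checks before the rest of the
    conjuncts, preserving relative order within each group."""
    not_null = [p for p in conjuncts if p.startswith("isNotNull(")]
    rest = [p for p in conjuncts if not p.startswith("isNotNull(")]
    return not_null + rest
```

Short-circuit evaluation of `&&` then skips the more expensive comparisons whenever the null check fails.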
    • M
      [SPARK-13738][SQL] Cleanup Data Source resolution · 1e288405
      Committed by Michael Armbrust
      Follow-up to #11509, that simply refactors the interface that we use when resolving a pluggable `DataSource`.
       - Multiple functions share the same set of arguments so we make this a case class, called `DataSource`.  Actual resolution is now done by calling a function on this class.
       - Instead of having multiple methods named `apply` (some of which write, some of which read) we now explicitly have `resolveRelation()` and `write(mode, df)`.
       - Get rid of `Array[String]` since this is an internal API and was forcing us to awkwardly call `toArray` in a bunch of places.
      
      Author: Michael Armbrust <michael@databricks.com>
      
      Closes #11572 from marmbrus/dataSourceResolution.
      1e288405
    • D
      [SPARK-13400] Stop using deprecated Octal escape literals · 076009b9
      Committed by Dongjoon Hyun
      ## What changes were proposed in this pull request?
      
      This removes the remaining deprecated octal escape literals. The following are the warnings on those two lines.
      ```
      LiteralExpressionSuite.scala:99: Octal escape literals are deprecated, use \u0000 instead.
      HiveQlSuite.scala:74: Octal escape literals are deprecated, use \u002c instead.
      ```
      
      ## How was this patch tested?
      
      Manual.
      During the build, there should be no warnings about `Octal escape literals`.
      ```
      mvn -DskipTests clean install
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11584 from dongjoon-hyun/SPARK-13400.
      076009b9
    • W
      [SPARK-13593] [SQL] improve the `createDataFrame` to accept data type string and verify the data · d57daf1f
      Committed by Wenchen Fan
      ## What changes were proposed in this pull request?
      
      This PR improves the `createDataFrame` method to also accept a data type string, so users can convert a Python RDD to a DataFrame easily, for example `df = rdd.toDF("a: int, b: string")`.
      It also supports flat schemas, so users can convert an RDD of ints to a DataFrame directly; we automatically wrap each int in a row.
      If a schema is given, we now check whether the actual data matches it and throw an error if it doesn't.
      
      ## How was this patch tested?
      
      new tests in `test.py` and doc test in `types.py`
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11444 from cloud-fan/pyrdd.
      d57daf1f
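The schema-string feature amounts to parsing `"name: type"` pairs. A toy Python sketch of such a parser, assuming only flat, comma-separated fields (pyspark's real parser also handles nested and complex types, and this function name is hypothetical):

```python
def parse_schema_string(s):
    """Parse a simplified schema string like "a: int, b: string" into
    (name, type) pairs. A sketch only; no nested or complex types."""
    fields = []
    for part in s.split(","):
        name, _, typ = part.partition(":")
        fields.append((name.strip(), typ.strip()))
    return fields
```

The resulting pairs would then be turned into a `StructType` before verifying the data against it.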
    • W
      [SPARK-13740][SQL] add null check for _verify_type in types.py · d5ce6172
      Committed by Wenchen Fan
      ## What changes were proposed in this pull request?
      
      This PR adds null check in `_verify_type` according to the nullability information.
      
      ## How was this patch tested?
      
      new doc tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11574 from cloud-fan/py-null-check.
      d5ce6172
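The added null check has a simple shape: `None` is accepted only when the field is nullable, and type checking runs only on non-null values. A minimal Python sketch (simplified signature; the real `_verify_type` takes Spark `DataType` objects):

```python
def verify_type(value, data_type, nullable=True):
    """Sketch of the null check: reject None for non-nullable fields,
    accept it for nullable ones, then fall through to the type check."""
    if value is None:
        if not nullable:
            raise ValueError("This field is not nullable, but got None")
        return  # None is fine for a nullable field; nothing else to check
    if not isinstance(value, data_type):
        raise TypeError(f"expected {data_type}, got {type(value)}")
```

Placing the null branch first keeps the rest of the verification logic untouched.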
    • Y
      [ML] testEstimatorAndModelReadWrite should call checkModelData · 9740954f
      Committed by Yanbo Liang
      ## What changes were proposed in this pull request?
      Although we defined `checkModelData` in the [`read/write` test](https://github.com/apache/spark/blob/master/mllib/src/test/scala/org/apache/spark/ml/regression/LinearRegressionSuite.scala#L994) of ML estimators/models and pass it to `testEstimatorAndModelReadWrite`, `testEstimatorAndModelReadWrite` fails to call `checkModelData` to check the equality of model data. So we do not actually run the model data equality check for any test case currently; we should fix it.
      BTW, this also fixes a bug in the LDA read/write test, which did not set `docConcentration`. This bug should have caused the test to fail, but it went unnoticed because `checkModelData` was never actually run.
      cc jkbradley mengxr
      ## How was this patch tested?
      No new unit tests; the existing ones should pass.
      
      Author: Yanbo Liang <ybliang8@gmail.com>
      
      Closes #11513 from yanboliang/ml-check-model-data.
      9740954f
    • W
      [SPARK-12727][SQL] support SQL generation for aggregate with multi-distinct · 46881b4e
      Committed by Wenchen Fan
      ## What changes were proposed in this pull request?
      
      This PR adds SQL generation support for aggregates with multi-distinct, by simply moving the `DistinctAggregationRewriter` rule to the optimizer.
      
      More discussion is needed, as this breaks an important contract: an analyzed plan should be able to run without optimization. However, the `ComputeCurrentTime` rule has already broken it to some extent, and I think maybe we should add a new phase for this kind of rule, because strictly speaking they don't belong to analysis and are coupled with the physical plan implementation.
      
      ## How was this patch tested?
      
      existing tests
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11579 from cloud-fan/distinct.
      46881b4e
    • J
      [SPARK-13695] Don't cache MEMORY_AND_DISK blocks as bytes in memory after spills · ad3c9a97
      Committed by Josh Rosen
      When a cached block is spilled to disk and read back in serialized form (i.e. as bytes), the current BlockManager implementation will attempt to re-insert the serialized block into the MemoryStore even if the block's storage level requests deserialized caching.
      
      This behavior adds some complexity to the MemoryStore but I don't think it offers many performance benefits and I'd like to remove it in order to simplify a larger refactoring patch. Therefore, this patch changes the behavior so that disk store reads will only cache bytes in the memory store for blocks with serialized storage levels.
      
      There are two places where we request serialized bytes from the BlockStore:
      
      1. getLocalBytes(), which is only called when reading local copies of TorrentBroadcast pieces. Broadcast pieces are always cached using a serialized storage level, so this won't lead to a mismatch in serialization forms if spilled bytes read from disk are cached as bytes in the memory store.
      2. the non-shuffle-block branch in getBlockData(), which is only called by the NettyBlockRpcServer when responding to requests to read remote blocks. Caching the serialized bytes in memory will only benefit us if those cached bytes are read before they're evicted and the likelihood of that happening seems low since the frequency of remote reads of non-broadcast cached blocks seems very low. Caching these bytes when they have a low probability of being read is bad if it risks the eviction of blocks which are cached in their expected serialized/deserialized forms, since those blocks seem more likely to be read in local computation.
      
      Given the argument above, I think this change is unlikely to cause performance regressions.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11533 from JoshRosen/remove-memorystore-level-mismatch.
      ad3c9a97
    • D
      [SPARK-13657] [SQL] Support parsing very long AND/OR expressions · 78d3b605
      Committed by Davies Liu
      ## What changes were proposed in this pull request?
      
      In order to avoid StackOverflow when parsing an expression with hundreds of ORs, we should use a loop instead of recursive functions to flatten the tree into a list. This PR also builds a balanced tree to reduce the depth of the generated And/Or expressions, to avoid StackOverflow in the analyzer/optimizer.
      
      ## How was this patch tested?
      
      Added new unit tests. Manually tested with TPCDS Q3, which has hundreds of predicates in it [1]. These predicates help to reduce the number of partitions, bringing the query time from 60 seconds down to 8 seconds.
      
      [1] https://github.com/cloudera/impala-tpcds-kit/blob/master/queries/q3.sql
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #11501 from davies/long_or.
      78d3b605
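Both halves of the approach (loop-based flattening and balanced tree construction) can be sketched in Python. Nodes are modeled as `("OR", left, right)` tuples; this is a hypothetical analogue of the Scala parser change, not its actual code:

```python
def flatten_or(node):
    """Flatten an Or tree into a list of leaves with an explicit stack,
    avoiding recursion (and hence StackOverflow) on very deep trees."""
    out, stack = [], [node]
    while stack:
        n = stack.pop()
        if isinstance(n, tuple) and n[0] == "OR":
            stack.append(n[2])  # push right first so left is popped first
            stack.append(n[1])
        else:
            out.append(n)
    return out

def balanced_or(predicates):
    """Build a balanced binary Or tree iteratively, so the tree depth is
    O(log n) instead of O(n) for a chain of n ORs."""
    level = list(predicates)
    while len(level) > 1:
        nxt = [("OR", level[i], level[i + 1])
               for i in range(0, len(level) - 1, 2)]
        if len(level) % 2 == 1:
            nxt.append(level[-1])  # carry the odd leftover up a level
        level = nxt
    return level[0]

def depth(node):
    """Depth of the Or tree (leaves have depth 0)."""
    if isinstance(node, tuple) and node[0] == "OR":
        return 1 + max(depth(node[1]), depth(node[2]))
    return 0
```

A chain of 8 ORs has depth 7, while the balanced tree has depth 3, which is what keeps the analyzer/optimizer recursion shallow.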
    • S
      [SPARK-13715][MLLIB] Remove last usages of jblas in tests · 54040f8d
      Committed by Sean Owen
      ## What changes were proposed in this pull request?
      
      Remove the last usage of jblas, in tests.
      
      ## How was this patch tested?
      
      Jenkins tests -- the same ones that are being modified.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #11560 from srowen/SPARK-13715.
      54040f8d
    • J
      [HOTFIX][YARN] Fix yarn cluster mode fire and forget regression · ca1a7b9d
      Committed by jerryshao
      ## What changes were proposed in this pull request?
      
      Fire-and-forget was disabled by default; patch #10205 made it enabled by default, so this is a regression that should be fixed.
      
      ## How was this patch tested?
      
      Manually verified this change.
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #11577 from jerryshao/hot-fix-yarn-cluster.
      ca1a7b9d
  2. 08 Mar 2016, 20 commits
    • W
      [SPARK-13637][SQL] use more information to simplify the code in Expand builder · 7d05d02b
      Committed by Wenchen Fan
      ## What changes were proposed in this pull request?
      
      The code in `Expand.apply` can be simplified using existing information:
      
      * the `groupByExprs` parameter are all `Attribute`s
      * the `child` parameter is a `Project` that appends aliased group-by expressions to its child's output
      
      ## How was this patch tested?
      
      by existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11485 from cloud-fan/expand.
      7d05d02b
    • J
      [SPARK-13675][UI] Fix wrong historyserver url link for application running in yarn cluster mode · 9e86e6ef
      Committed by jerryshao
      ## What changes were proposed in this pull request?
      
      Current URL for each application to access history UI is like:
      http://localhost:18080/history/application_1457058760338_0016/1/jobs/ or http://localhost:18080/history/application_1457058760338_0016/2/jobs/
      
      Here **1** or **2** represents the attempt number in `historypage.js`, but `HistoryServer` parses it as an attempt id, while the correct attempt id should look like "appattempt_1457058760338_0016_000002", so `HistoryServer` fails to parse a correct attempt id.
      
      This is OK in yarn client mode, since we don't need this attempt id to fetch the app cache, but it fails in yarn cluster mode, where attempt id "1" or "2" is actually wrong.
      
      So we should fix this URL to parse the correct application id and attempt id. Also, the suffix "jobs/" is not needed.
      
      Here is the screenshot:
      
      ![screen shot 2016-02-29 at 3 57 32 pm](https://cloud.githubusercontent.com/assets/850797/13524377/d4b44348-e235-11e5-8b3e-bc06de306e87.png)
      
      ## How was this patch tested?
      
      This patch was tested manually, with different masters and deploy modes.
      
      ![image](https://cloud.githubusercontent.com/assets/850797/13524419/118be5a0-e236-11e5-8022-3ff613ccde46.png)
      
      Author: jerryshao <sshao@hortonworks.com>
      
      Closes #11518 from jerryshao/SPARK-13675.
      9e86e6ef
    • D
      [SPARK-13117][WEB UI] WebUI should use the local ip not 0.0.0.0 · 9bf76ddd
      Committed by Devaraj K
      ## What changes were proposed in this pull request?
      
      In the WebUI, the Jetty server now starts with the SPARK_LOCAL_IP config value if it
      is configured; otherwise it starts with the default value '0.0.0.0'.
      
      This is a continuation of the closed PR https://github.com/apache/spark/pull/11133 for the JIRA SPARK-13117 and the discussion in SPARK-13117.
      
      ## How was this patch tested?
      
      This has been verified using the command 'netstat -tnlp | grep <PID>' to check which IP/hostname each process binds to, following the steps below.
      
      In the results below, the PID in the command is the corresponding process id.
      
      #### Without the patch changes,
      The Web UI (Jetty server) does not take the value configured for SPARK_LOCAL_IP and listens on all interfaces.
      ###### Master
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 3930
      tcp6       0      0 :::8080                 :::*                    LISTEN      3930/java
      ```
      
      ###### Worker
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 4090
      tcp6       0      0 :::8081                 :::*                    LISTEN      4090/java
      ```
      
      ###### History Server
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 2471
      tcp6       0      0 :::18080                :::*                    LISTEN      2471/java
      ```
      ###### Driver
      ```
      [devarajstobdtserver2 spark-master]$ netstat -tnlp | grep 6556
      tcp6       0      0 :::4040                 :::*                    LISTEN      6556/java
      ```
      
      #### With the patch changes
      
      ##### i. With SPARK_LOCAL_IP configured
      If SPARK_LOCAL_IP is configured, the Web UI (Jetty server) of every process binds to the configured value.
      ###### Master
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 1561
      tcp6       0      0 x.x.x.x:8080       :::*                    LISTEN      1561/java
      ```
      ###### Worker
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 2229
      tcp6       0      0 x.x.x.x:8081       :::*                    LISTEN      2229/java
      ```
      ###### History Server
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 3747
      tcp6       0      0 x.x.x.x:18080      :::*                    LISTEN      3747/java
      ```
      ###### Driver
      ```
      [devarajstobdtserver2 spark-master]$ netstat -tnlp | grep 6013
      tcp6       0      0 x.x.x.x:4040       :::*                    LISTEN      6013/java
      ```
      
      ##### ii. Without SPARK_LOCAL_IP configured
      If SPARK_LOCAL_IP is not configured, the Web UI (Jetty server) of every process starts with the default value '0.0.0.0'.
      ###### Master
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 4573
      tcp6       0      0 :::8080                 :::*                    LISTEN      4573/java
      ```
      
      ###### Worker
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 4703
      tcp6       0      0 :::8081                 :::*                    LISTEN      4703/java
      ```
      
      ###### History Server
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 4846
      tcp6       0      0 :::18080                :::*                    LISTEN      4846/java
      ```
      
      ###### Driver
      ```
      [devarajstobdtserver2 sbin]$ netstat -tnlp | grep 5437
      tcp6       0      0 :::4040                 :::*                    LISTEN      5437/java
      ```
      
      Author: Devaraj K <devaraj@apache.org>
      
      Closes #11490 from devaraj-kavali/SPARK-13117-v1.
      9bf76ddd
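The binding behavior being tested above can be sketched in Python with a plain socket. The helper names are hypothetical; the real change is in Spark's Jetty setup:

```python
import os
import socket

def resolve_bind_host():
    """Pick the bind address the way the patch does: use SPARK_LOCAL_IP
    when it is configured, else fall back to 0.0.0.0 (all interfaces)."""
    return os.environ.get("SPARK_LOCAL_IP") or "0.0.0.0"

def start_ui_server(port=0):
    """Sketch of starting a listening socket on the resolved address
    (port=0 lets the OS pick a free port)."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.bind((resolve_bind_host(), port))
    s.listen(1)
    return s
```

With SPARK_LOCAL_IP set, `netstat` would show the socket bound to that address instead of the `:::PORT` wildcard seen in the "without the patch" output above.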
    • D
      [HOT-FIX][BUILD] Use the new location of `checkstyle-suppressions.xml` · 7771c731
      Committed by Dongjoon Hyun
      ## What changes were proposed in this pull request?
      
      This PR fixes `dev/lint-java` and `mvn checkstyle:check` failures due to the recent file location change.
      The following is the error message of current master.
      ```
      Checkstyle checks failed at following occurrences:
      [ERROR] Failed to execute goal org.apache.maven.plugins:maven-checkstyle-plugin:2.17:check (default-cli) on project spark-parent_2.11: Failed during checkstyle configuration: cannot initialize module SuppressionFilter - Cannot set property 'file' to 'checkstyle-suppressions.xml' in module SuppressionFilter: InvocationTargetException: Unable to find: checkstyle-suppressions.xml -> [Help 1]
      ```
      
      ## How was this patch tested?
      
      Manual. The following command should run correctly.
      ```
      ./dev/lint-java
      mvn checkstyle:check
      ```
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11567 from dongjoon-hyun/hotfix_checkstyle_suppression.
      7771c731
    • J
      [SPARK-13659] Refactor BlockStore put*() APIs to remove returnValues · e52e597d
      Committed by Josh Rosen
      In preparation for larger refactoring, this patch removes the confusing `returnValues` option from the BlockStore put() APIs: returning the value is only useful in one place (caching) and in other situations, such as block replication, it's simpler to put() and then get().
      
      As part of this change, I needed to refactor `BlockManager.doPut()`'s block replication code. I also changed `doPut()` to access the memory and disk stores directly rather than calling them through the BlockStore interface; this is in anticipation of a followup patch to remove the BlockStore interface so that the disk store can expose a binary-data-oriented API which is not concerned with Java objects or serialization.
      
      These changes should be covered by the existing storage unit tests. The best way to review this patch is probably to look at the individual commits, all of which are small and have useful descriptions to guide the review.
      
      /cc davies for review.
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11502 from JoshRosen/remove-returnvalues.
      e52e597d
    • S
      [SPARK-13711][CORE] Don't call SparkUncaughtExceptionHandler in AppClient as it's in driver · 017cdf2b
      Committed by Shixiong Zhu
      ## What changes were proposed in this pull request?
      
      AppClient runs on the driver side. It should not call `Utils.tryOrExit`, as that sends the exception to SparkUncaughtExceptionHandler and calls `System.exit`. This PR simply removes `Utils.tryOrExit`.
      
      ## How was this patch tested?
      
      manual tests.
      
      Author: Shixiong Zhu <shixiong@databricks.com>
      
      Closes #11566 from zsxwing/SPARK-13711.
      017cdf2b
    • D
      [SPARK-13404] [SQL] Create variables for input row when it's actually used · 25bba58d
      Committed by Davies Liu
      ## What changes were proposed in this pull request?
      
      This PR changes the way we generate the code for the output variables passed from a plan to its parent.
      
      Right now, they are generated before calling consume() on the parent. This is inefficient: if the parent is a Filter or Join, which could filter out most of the rows, the time spent accessing columns that are not used by the Filter or Join is wasted.
      
      This PR tries to improve this by deferring access to columns until they are actually used by a plan. After this PR, a plan does not need to generate code to evaluate its output variables; it just passes the ExprCode to its parent via `consume()`. In `parent.consumeChild()`, it checks the output from the child against `usedInputs` and generates the code for the columns that are part of `usedInputs` before calling `doConsume()`.
      
      This PR also changes the `if` from
      ```
      if (cond) {
        xxx
      }
      ```
      to
      ```
      if (!cond) continue;
      xxx
      ```
      The new form helps reduce the nesting depth for multiple levels of Filter and BroadcastHashJoin.
      
      It also added some comments for operators.
      
      ## How was this patch tested?
      
      Unit tests. Manually ran TPCDS Q55; this PR improves performance by about 30% (scale=10, from 2.56s to 1.96s).
      
      Author: Davies Liu <davies@databricks.com>
      
      Closes #11274 from davies/gen_defer.
      25bba58d
    • A
      [SPARK-13689][SQL] Move helper things in CatalystQl to new utils object · da7bfac4
      Committed by Andrew Or
      ## What changes were proposed in this pull request?
      
      When we add more DDL parsing logic in the future, SparkQl will become very big. To keep it smaller, we'll introduce helper "parser objects", e.g. one to parse alter table commands. However, these parser objects will need to access some helper methods that exist in CatalystQl. The proposal is to move those methods to an isolated ParserUtils object.
      
      This is based on viirya's changes in #11048. It prefaces the bigger fix for SPARK-13139 to make the diff of that patch smaller.
      
      ## How was this patch tested?
      
      No change in functionality, so just Jenkins.
      
      Author: Andrew Or <andrew@databricks.com>
      
      Closes #11529 from andrewor14/parser-utils.
      da7bfac4
    • T
      [SPARK-13648] Add Hive Cli to classes for isolated classloader · 46f25c24
      Committed by Tim Preece
      ## What changes were proposed in this pull request?
      
      Adds the hive-cli classes to the classloader.
      
      ## How was this patch tested?
      
      The Hive `VersionsSuite` tests were run.
      
      This is my original work and I license the work to the project under the project's open source license.
      
      Author: Tim Preece <tim.preece.in.oz@gmail.com>
      
      Closes #11495 from preecet/master.
      46f25c24
    • M
      [SPARK-13665][SQL] Separate the concerns of HadoopFsRelation · e720dda4
      Michael Armbrust 提交于
      `HadoopFsRelation` is used for reading most files into Spark SQL.  However today this class mixes the concerns of file management, schema reconciliation, scan building, bucketing, partitioning, and writing data.  As a result, many data sources are forced to reimplement the same functionality and the various layers have accumulated a fair bit of inefficiency.  This PR is a first cut at separating this into several components / interfaces that are each described below.  Additionally, all implementations inside of Spark (parquet, csv, json, text, orc, libsvm) have been ported to the new API `FileFormat`.  External libraries, such as spark-avro will also need to be ported to work with Spark 2.0.
      
      ### HadoopFsRelation
      A simple `case class` that acts as a container for all of the metadata required to read from a datasource.  All discovery, resolution and merging logic for schemas and partitions has been removed.  This is an internal representation that no longer needs to be exposed to developers.
      
      ```scala
      case class HadoopFsRelation(
          sqlContext: SQLContext,
          location: FileCatalog,
          partitionSchema: StructType,
          dataSchema: StructType,
          bucketSpec: Option[BucketSpec],
          fileFormat: FileFormat,
          options: Map[String, String]) extends BaseRelation
      ```
      
      ### FileFormat
      The primary interface that will be implemented by each different format including external libraries.  Implementors are responsible for reading a given format and converting it into `InternalRow` as well as writing out an `InternalRow`.  A format can optionally return a schema that is inferred from a set of files.
      
      ```scala
      trait FileFormat {
        def inferSchema(
            sqlContext: SQLContext,
            options: Map[String, String],
            files: Seq[FileStatus]): Option[StructType]
      
        def prepareWrite(
            sqlContext: SQLContext,
            job: Job,
            options: Map[String, String],
            dataSchema: StructType): OutputWriterFactory
      
        def buildInternalScan(
            sqlContext: SQLContext,
            dataSchema: StructType,
            requiredColumns: Array[String],
            filters: Array[Filter],
            bucketSet: Option[BitSet],
            inputFiles: Array[FileStatus],
            broadcastedConf: Broadcast[SerializableConfiguration],
            options: Map[String, String]): RDD[InternalRow]
      }
      ```
      
      The current interface is based on what was required to get all the tests passing again, but still mixes a couple of concerns (i.e. `bucketSet` is passed down to the scan instead of being resolved by the planner).  Additionally, scans are still returning `RDD`s instead of iterators for single files.  In a future PR, bucketing should be removed from this interface and the scan should be isolated to a single file.
      
      ### FileCatalog
      This interface is used to list the files that make up a given relation, as well as handle directory based partitioning.
      
      ```scala
      trait FileCatalog {
        def paths: Seq[Path]
        def partitionSpec(schema: Option[StructType]): PartitionSpec
        def allFiles(): Seq[FileStatus]
        def getStatus(path: Path): Array[FileStatus]
        def refresh(): Unit
      }
      ```
      
      Currently there are two implementations:
       - `HDFSFileCatalog` - based on code from the old `HadoopFsRelation`.  Infers partitioning by recursive listing and caches this data for performance
       - `HiveFileCatalog` - based on the above, but it uses the partition spec from the Hive Metastore.
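      As a self-contained illustration of the listing-and-caching contract above (with stand-in types instead of Hadoop's `Path`/`FileStatus`, and partition handling omitted for brevity), a minimal in-memory catalog might look like:

      ```scala
      // Toy stand-ins for Hadoop's Path and FileStatus.
      final case class SimplePath(value: String)
      final case class SimpleStatus(path: SimplePath, length: Long)

      // A minimal FileCatalog-like class: it caches the listing for performance
      // and only re-lists when refresh() is called, mirroring HDFSFileCatalog's
      // caching behavior. partitionSpec handling is omitted.
      class InMemoryCatalog(list: () => Map[SimplePath, Seq[SimpleStatus]]) {
        private var cached: Map[SimplePath, Seq[SimpleStatus]] = list()
        def paths: Seq[SimplePath] = cached.keys.toSeq
        def allFiles(): Seq[SimpleStatus] = cached.values.flatten.toSeq
        def getStatus(path: SimplePath): Seq[SimpleStatus] = cached.getOrElse(path, Nil)
        def refresh(): Unit = { cached = list() }
      }
      ```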
      
      ### ResolvedDataSource
      Produces a logical plan given the following description of a Data Source (which can come from DataFrameReader or a metastore):
       - `paths: Seq[String] = Nil`
       - `userSpecifiedSchema: Option[StructType] = None`
       - `partitionColumns: Array[String] = Array.empty`
       - `bucketSpec: Option[BucketSpec] = None`
       - `provider: String`
       - `options: Map[String, String]`
      
      This class is responsible for deciding which of the Data Source APIs a given provider is using (including the non-file based ones).  All reconciliation of partitions, buckets, schema from metastores or inference is done here.
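      The dispatch decision can be sketched as a simple match on the provider string (names here are illustrative, not Spark's actual resolution code, which also handles class-name lookup and service loading):

      ```scala
      // Illustrative result types: either the file-based FileFormat path or a
      // generic relation provider.
      sealed trait ResolvedApi
      final case class FileBased(format: String) extends ResolvedApi
      final case class RelationBased(providerClass: String) extends ResolvedApi

      // Built-in file formats go through the FileFormat API; anything else is
      // treated as a non-file-based provider. A sketch, not Spark's real logic.
      def resolveProvider(provider: String): ResolvedApi = provider match {
        case "parquet" | "csv" | "json" | "text" | "orc" => FileBased(provider)
        case other => RelationBased(other)
      }
      ```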
      
      ### DataSourceAnalysis / DataSourceStrategy
      Responsible for analyzing and planning reading/writing of data using any of the Data Source APIs, including:
       - pruning the files from partitions that will be read based on filters.
       - appending partition columns*
       - applying additional filters when a data source can not evaluate them internally.
       - constructing an RDD that is bucketed correctly when required*
       - sanity checking schema match-up and other analysis when writing.
      
      *In the future we should do the following:
       - Break out file handling into its own Strategy as it's sufficiently complex / isolated.
       - Push the appending of partition columns down in to `FileFormat` to avoid an extra copy / unvectorization.
       - Use a custom RDD for scans instead of `SQLNewNewHadoopRDD2`
      
      Author: Michael Armbrust <michael@databricks.com>
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11509 from marmbrus/fileDataSource.
      e720dda4
    • S
      [SPARK-13596][BUILD] Move misc top-level build files into appropriate subdirs · 0eea12a3
      Sean Owen 提交于
      ## What changes were proposed in this pull request?
      
      Move many top-level files in dev/ or other appropriate directory. In particular, put `make-distribution.sh` in `dev` and update docs accordingly. Remove deprecated `sbt/sbt`.
      
      I was (so far) unable to figure out how to move `tox.ini`. `scalastyle-config.xml` should be movable but edits to the project `.sbt` files didn't work; config file location is updatable for compile but not test scope.
      
      ## How was this patch tested?
      
      `./dev/run-tests` to verify RAT and checkstyle work. Jenkins tests for the rest.
      
      Author: Sean Owen <sowen@cloudera.com>
      
      Closes #11522 from srowen/SPARK-13596.
      0eea12a3
    • H
      [SPARK-13442][SQL] Make type inference recognize boolean types · 8577260a
      hyukjinkwon 提交于
      ## What changes were proposed in this pull request?
      
      https://issues.apache.org/jira/browse/SPARK-13442
      
      This PR adds the support for inferring `BooleanType` for schema.
      It infers case-insensitive `true` / `false` values as `BooleanType`.
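      The rule can be sketched as follows (a simplified stand-in, not the actual `CSVInferSchema` code): a column is typed boolean only when every sampled value is a case-insensitive `true`/`false`; otherwise it falls back to string.

      ```scala
      // Toy stand-ins for the inferred types.
      sealed trait InferredType
      case object InferredBoolean extends InferredType
      case object InferredString extends InferredType

      // A value is boolean-like iff it equals "true" or "false", ignoring case.
      def inferField(value: String): InferredType =
        if (value.equalsIgnoreCase("true") || value.equalsIgnoreCase("false")) InferredBoolean
        else InferredString

      // A column is boolean only when all sampled values are boolean-like.
      def inferColumn(values: Seq[String]): InferredType =
        if (values.nonEmpty && values.forall(inferField(_) == InferredBoolean)) InferredBoolean
        else InferredString
      ```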
      
      Unit tests were added in `CSVInferSchemaSuite` and, for end-to-end coverage, in `CSVSuite`.
      
      ## How was this patch tested?
      
      This was tested with unit tests and with `dev/run-tests` for coding style.
      
      Author: hyukjinkwon <gurwls223@gmail.com>
      
      Closes #11315 from HyukjinKwon/SPARK-13442.
      8577260a
    • M
      [SPARK-529][CORE][YARN] Add type-safe config keys to SparkConf. · e1fb8579
      Marcelo Vanzin 提交于
      This is, in a way, the basics to enable SPARK-529 (which was closed as
      won't fix but I think is still valuable). In fact, Spark SQL created
      something for that, and this change basically factors out that code
      and inserts it into SparkConf, with some extra bells and whistles.
      
      To showcase the usage of this pattern, I modified the YARN backend
      to use the new config keys (defined in the new `config` package object
      under `o.a.s.deploy.yarn`). Most of the changes are mechanic, although
      logic had to be slightly modified in a handful of places.
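      The pattern can be sketched roughly like this (the entry class, object names, and keys are illustrative, not Spark's actual API, which supports documentation strings, optional values, and more conversions):

      ```scala
      // A typed config entry: the key, a default, and a parser from raw strings.
      final case class ConfEntry[T](key: String, default: T, parse: String => T) {
        def readFrom(settings: Map[String, String]): T =
          settings.get(key).map(parse).getOrElse(default)
      }

      // Keys are defined once, in one place, with their types and defaults,
      // instead of scattering string literals and ad-hoc parsing around the code.
      object YarnSettings {
        val MaxAppAttempts: ConfEntry[Int] =
          ConfEntry("spark.yarn.maxAppAttempts", 2, _.toInt)
        val QueueName: ConfEntry[String] =
          ConfEntry("spark.yarn.queue", "default", identity)
      }
      ```

      Callers then read `YarnSettings.MaxAppAttempts.readFrom(conf)` and get an `Int` back, with parsing and defaulting handled in one place.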
      
      Author: Marcelo Vanzin <vanzin@cloudera.com>
      
      Closes #10205 from vanzin/conf-opts.
      e1fb8579
    • J
      [SPARK-13655] Improve isolation between tests in KinesisBackedBlockRDDSuite · e9e67b39
      Josh Rosen 提交于
      This patch modifies `KinesisBackedBlockRDDTests` to increase the isolation between tests in order to fix a bug which causes the tests to hang.
      
      See #11558 for more details.
      
      /cc zsxwing srowen
      
      Author: Josh Rosen <joshrosen@databricks.com>
      
      Closes #11564 from JoshRosen/SPARK-13655.
      e9e67b39
    • G
      [SPARK-13722][SQL] No Push Down for Non-deterministics Predicates through Generate · b6071a70
      gatorsmile 提交于
      #### What changes were proposed in this pull request?
      
      Non-deterministic predicates should not be pushed through Generate.
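      To see why, here is a small self-contained model (a stateful predicate stands in for something like `rand()`): evaluating the predicate before versus after the generate changes which rows survive, because its results depend on how many times and on what it has been invoked.

      ```scala
      // Generate modeled as a flatten over per-row collections.
      def explode(rows: Seq[Seq[Int]]): Seq[Int] = rows.flatten

      // A call counter stands in for a non-deterministic predicate: its answer
      // depends on invocation order and count, not just its argument.
      def run(rows: Seq[Seq[Int]], pushDown: Boolean): Seq[Int] = {
        var calls = 0
        val pred = (_: Any) => { calls += 1; calls % 2 == 1 }
        if (pushDown) explode(rows.filter(pred)) // pushed beneath Generate
        else explode(rows).filter(pred)          // evaluated after Generate
      }
      ```

      Running both variants on the same input yields different surviving rows, which is exactly why the optimizer must not push such predicates down.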
      
      #### How was this patch tested?
      
      Added a test case in `FilterPushdownSuite.scala`
      
      Author: gatorsmile <gatorsmile@gmail.com>
      
      Closes #11562 from gatorsmile/pushPredicateDownWindow.
      b6071a70
    • C
      [MINOR][DOC] improve the doc for "spark.memory.offHeap.size" · a3ec50a4
      CodingCat 提交于
      The description of "spark.memory.offHeap.size" in the current document does not clearly state that the memory is counted in bytes.
      
      This PR contains a small fix for this tiny issue
      
      document fix
      
      Author: CodingCat <zhunansjtu@gmail.com>
      
      Closes #11561 from CodingCat/master.
      a3ec50a4
    • D
      [SPARK-12243][BUILD][PYTHON] PySpark tests are slow in Jenkins. · e72914f3
      Dongjoon Hyun 提交于
      ## What changes were proposed in this pull request?
      
      In the Jenkins pull request builder, PySpark tests take around [962 seconds ](https://amplab.cs.berkeley.edu/jenkins/job/SparkPullRequestBuilder/52530/console) of end-to-end time to run, despite the fact that we run four Python test suites in parallel. According to the log, the basic reason is that the long-running tests start at the end due to the FIFO queue. We first try to reduce the test time by starting some long-running tests first with a simple priority queue.
      
      ```
      ========================================================================
      Running PySpark tests
      ========================================================================
      ...
      Finished test(python3.4): pyspark.streaming.tests (213s)
      Finished test(pypy): pyspark.sql.tests (92s)
      Finished test(pypy): pyspark.streaming.tests (280s)
      Tests passed in 962 seconds
      ```
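      The scheduling intuition can be sketched with a greedy simulator (durations below are illustrative; the real change only reorders the suite queue): with parallel workers, starting the longest suites first can lower total wall-clock time compared to a FIFO order that leaves them for last.

      ```scala
      // Greedy list scheduling: assign each task, in queue order, to the
      // currently least-loaded of `workers` parallel workers; the makespan is
      // the heaviest worker's total load, i.e. end-to-end wall-clock time.
      def makespan(durationsInOrder: Seq[Int], workers: Int): Int = {
        val loads = Array.fill(workers)(0)
        durationsInOrder.foreach { d =>
          val i = loads.indexOf(loads.min) // least-loaded worker takes next task
          loads(i) += d
        }
        loads.max
      }

      val fifo = Seq(10, 20, 30, 280, 213) // long-running suites queued last
      val longestFirst = fifo.sortBy(d => -d)
      ```

      For these made-up durations and two workers, the longest-first order finishes strictly earlier than the FIFO order.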
      
      ## How was this patch tested?
      
      Manual check.
      Check 'Running PySpark tests' part of the Jenkins log.
      
      Author: Dongjoon Hyun <dongjoon@apache.org>
      
      Closes #11551 from dongjoon-hyun/SPARK-12243.
      e72914f3
    • S
      [SPARK-13495][SQL] Add Null Filters in the query plan for Filters/Joins based... · ef770031
      Sameer Agarwal 提交于
      [SPARK-13495][SQL] Add Null Filters in the query plan for Filters/Joins based on their data constraints
      
      ## What changes were proposed in this pull request?
      
      This PR adds an optimizer rule to eliminate reading (unnecessary) NULL values if they are not required for correctness by inserting `isNotNull` filters in the query plan. These filters are currently inserted beneath existing `Filter` and `Join` operators and are inferred based on their data constraints.
      
      Note: While this optimization is applicable to all types of join, it primarily benefits `Inner` and `LeftSemi` joins.
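      The inference can be sketched on a toy expression ADT (not Catalyst's) for the equi-join case: any attribute appearing in an equality join key must be non-null for the join to match, so `IsNotNull` guards can be added beneath both sides.

      ```scala
      // Toy expression ADT standing in for Catalyst expressions.
      sealed trait Expr
      final case class Attr(name: String) extends Expr
      final case class IsNotNull(a: Attr) extends Expr
      final case class EqualTo(l: Attr, r: Attr) extends Expr

      // For an inner equi-join, every join-key attribute must be non-null, so
      // infer one IsNotNull filter per distinct key on each side.
      def inferNotNullFromJoinKeys(cond: Seq[EqualTo]): (Seq[IsNotNull], Seq[IsNotNull]) = {
        val left  = cond.map(c => IsNotNull(c.l)).distinct
        val right = cond.map(c => IsNotNull(c.r)).distinct
        (left, right)
      }
      ```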
      
      ## How was this patch tested?
      
      1. Added a new `NullFilteringSuite` that tests for `IsNotNull` filters in the query plan for joins and filters. Also, tests interaction with the `CombineFilters` optimizer rules.
      2. Test generated ExpressionTrees via `OrcFilterSuite`
      3. Test filter source pushdown logic via `SimpleTextHadoopFsRelationSuite`
      
      cc yhuai nongli
      
      Author: Sameer Agarwal <sameer@databricks.com>
      
      Closes #11372 from sameeragarwal/gen-isnotnull.
      ef770031
    • W
      [SPARK-13694][SQL] QueryPlan.expressions should always include all expressions · 48964111
      Wenchen Fan 提交于
      ## What changes were proposed in this pull request?
      
      It's weird that `expressions` doesn't always include all the expressions a plan holds. This PR marks `QueryPlan.expressions` final to forbid subclasses from overriding it to exclude some expressions. Currently only `Generate` overrides it; we can use `producedAttributes` to fix the unresolved attribute problem for it.
      
      Note that this PR doesn't fix the problem in #11497
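      The invariant can be illustrated with a toy plan node (names are illustrative, not Catalyst's): `expressions` is final on the base class and walks the node's arguments, so a subclass like a toy `Generate` cannot hide the expressions it holds.

      ```scala
      // Toy expression type standing in for Catalyst's Expression.
      sealed trait PlanExpr
      final case class ColRef(name: String) extends PlanExpr

      abstract class ToyPlanNode {
        protected def args: Seq[Any]
        // Final: subclasses cannot override this to exclude expressions.
        final def expressions: Seq[PlanExpr] = args.flatMap {
          case e: PlanExpr => Seq(e)
          case s: Seq[_]   => s.collect { case e: PlanExpr => e }
          case _           => Nil
        }
      }

      // A Generate-like node: both the generator and its output columns are
      // surfaced by expressions, whether the subclass wants it or not.
      final case class ToyGenerate(generator: PlanExpr, generatorOutput: Seq[ColRef])
          extends ToyPlanNode {
        protected def args: Seq[Any] = Seq(generator, generatorOutput)
      }
      ```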
      
      ## How was this patch tested?
      
      existing tests.
      
      Author: Wenchen Fan <wenchen@databricks.com>
      
      Closes #11532 from cloud-fan/generate.
      48964111
    • D
      [SPARK-13651] Generator outputs are not resolved correctly resulting in run time error · d7eac9d7
      Dilip Biswal 提交于
      ## What changes were proposed in this pull request?
      
      ```
      Seq(("id1", "value1")).toDF("key", "value").registerTempTable("src")
      sqlContext.sql("SELECT t1.* FROM src LATERAL VIEW explode(map('key1', 100, 'key2', 200)) t1 AS key, value")
      ```
      Results in following logical plan
      
      ```
      Project [key#2,value#3]
      +- Generate explode(HiveGenericUDF#org.apache.hadoop.hive.ql.udf.generic.GenericUDFMap(key1,100,key2,200)), true, false, Some(genoutput), [key#2,value#3]
         +- SubqueryAlias src
            +- Project [_1#0 AS key#2,_2#1 AS value#3]
               +- LocalRelation [_1#0,_2#1], [[id1,value1]]
      ```
      
      The above query fails with following runtime error.
      ```
      java.lang.ClassCastException: java.lang.Integer cannot be cast to org.apache.spark.unsafe.types.UTF8String
      	at org.apache.spark.sql.catalyst.expressions.BaseGenericInternalRow$class.getUTF8String(rows.scala:46)
      	at org.apache.spark.sql.catalyst.expressions.GenericInternalRow.getUTF8String(rows.scala:221)
      	at org.apache.spark.sql.catalyst.expressions.GeneratedClass$SpecificUnsafeProjection.apply(generated.java:42)
      	at org.apache.spark.sql.execution.Generate$$anonfun$doExecute$1$$anonfun$apply$9.apply(Generate.scala:98)
      	at org.apache.spark.sql.execution.Generate$$anonfun$doExecute$1$$anonfun$apply$9.apply(Generate.scala:96)
      	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
      	at scala.collection.Iterator$$anon$11.next(Iterator.scala:370)
      	at scala.collection.Iterator$class.foreach(Iterator.scala:742)
      	at scala.collection.AbstractIterator.foreach(Iterator.scala:1194)
              <stack-trace omitted.....>
      ```
      In this case the generator's outputs are wrongly resolved from its child (LocalRelation) due to
      https://github.com/apache/spark/blob/master/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/analysis/Analyzer.scala#L537-L548
      ## How was this patch tested?
      
      Added unit tests in hive/SQLQuerySuite and AnalysisSuite
      
      Author: Dilip Biswal <dbiswal@us.ibm.com>
      
      Closes #11497 from dilipbiswal/spark-13651.
      d7eac9d7