I am trying to run HiveQL on Spark with my own custom SerDe (it worked properly with pure Hive). I followed the instructions at: https://cwiki.apache.org/confluence/display/Hive/Hive+on+Spark%3A+Getting+Started
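For context, the table that uses the SerDe is defined roughly like this (the jar path, table name, columns, and SerDe class below are placeholders, not my real ones):

    -- Register the custom SerDe jar in the Hive session (placeholder path)
    ADD JAR /path/to/my-custom-serde.jar;

    -- Table backed by the custom SerDe (placeholder names)
    CREATE EXTERNAL TABLE my_table (
      id INT,
      payload STRING
    )
    ROW FORMAT SERDE 'com.example.MyCustomSerDe'
    STORED AS TEXTFILE
    LOCATION '/data/my_table';

This setup works fine when hive.execution.engine is left at its default; the problem only appears when I switch to Spark.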
However, I am very confused about this part: "Start Spark cluster (both standalone and Spark on YARN are supported)." As I understand it, a Spark cluster only needs to be started explicitly when Spark runs in standalone mode. Since I intend to run Spark on YARN, do I still need to start a Spark cluster? What I actually did: I only started Hadoop YARN, and because I did not know what value to give the property spark.master, I did not set it at all (my effective configuration is sketched below).
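These are the only Spark-related settings I issue in the Hive CLI before running the query; the spark.master lines are commented out because I do not know which value is right (yarn-client / yarn-cluster are my guesses based on the Spark on YARN docs, not something I have verified):

    -- Switch the Hive execution engine from the default to Spark
    set hive.execution.engine=spark;

    -- Deliberately NOT set; my guesses for YARN mode would be one of:
    -- set spark.master=yarn-client;
    -- set spark.master=yarn-cluster;

Probably because of this missing setting, I get the following error when running a Hive query that uses my own SerDe: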
2015-10-05 20:42:07,184 INFO [main]: status.SparkJobMonitor (RemoteSparkJobMonitor.java:startMonitor(67)) - Job hasn't been submitted after 61s. Aborting it.
2015-10-05 20:42:07,184 ERROR [main]: status.SparkJobMonitor (SessionState.java:printError(960)) - Status: SENT
2015-10-05 20:42:07,184 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=SparkRunJob start=1444066866174 end=1444066927184 duration=61010 from=org.apache.hadoop.hive.ql.exec.spark.status.SparkJobMonitor>
2015-10-05 20:42:07,300 ERROR [main]: ql.Driver (SessionState.java:printError(960)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.spark.SparkTask
2015-10-05 20:42:07,300 INFO [main]: log.PerfLogger (PerfLogger.java:PerfLogEnd(148)) - </PERFLOG method=Driver.execute start=1444066848958 end=1444066927300 duration=78342 from=org.apache.hadoop.hive.ql.Driver>
...
At the end of the log there is also the following exception:
2015-10-05 20:42:16,658 INFO [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(569)) - 15/10/05 20:42:16 INFO yarn.Client: Application report for application_1444066615793_0001 (state: ACCEPTED)
2015-10-05 20:42:17,337 WARN [main]: client.SparkClientImpl (SparkClientImpl.java:stop(154)) - Timed out shutting down remote driver, interrupting...
2015-10-05 20:42:17,337 WARN [Driver]: client.SparkClientImpl (SparkClientImpl.java:run(430)) - Waiting thread interrupted, killing child process.
2015-10-05 20:42:17,345 WARN [stderr-redir-1]: client.SparkClientImpl (SparkClientImpl.java:run(572)) - Error in redirector thread.
java.io.IOException: Stream closed
    at java.io.BufferedInputStream.getBufIfOpen(BufferedInputStream.java:162)
    at java.io.BufferedInputStream.read1(BufferedInputStream.java:272)
    at java.io.BufferedInputStream.read(BufferedInputStream.java:334)
    at sun.nio.cs.StreamDecoder.readBytes(StreamDecoder.java:283)
    at sun.nio.cs.StreamDecoder.implRead(StreamDecoder.java:325)
    at sun.nio.cs.StreamDecoder.read(StreamDecoder.java:177)
    at java.io.InputStreamReader.read(InputStreamReader.java:184)
    at java.io.BufferedReader.fill(BufferedReader.java:154)
    at java.io.BufferedReader.readLine(BufferedReader.java:317)
    at java.io.BufferedReader.readLine(BufferedReader.java:382)
    at org.apache.hive.spark.client.SparkClientImpl$Redirector.run(SparkClientImpl.java:568)
    at java.lang.Thread.run(Thread.java:745)
2015-10-05 20:42:17,371 INFO [Thread-15]: session.SparkSessionManagerImpl (SparkSessionManagerImpl.java:shutdown(146)) - Closing the session manager.
I would be grateful if anyone could give some advice. Thanks a lot in advance.