
I wrote code to read a text file via Spark. It works fine locally, but it generates the error below when running on HDInsight while reading the text file from Blob storage.

    org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0 (TID 5, wn1-hchdin.bpqkkmavxs0ehkfnaruw4ed03d.dx.internal.cloudapp.net, executor 2): java.lang.AbstractMethodError: com.journaldev.sparkdemo.WordCounter$$Lambda$17/790636414.call(Ljava/lang/Object;)Ljava/util/Iterator;
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
        at org.apache.spark.api.java.JavaRDDLike$$anonfun$fn$1$1.apply(JavaRDDLike.scala:125)
        at scala.collection.Iterator$$anon$12.nextCur(Iterator.scala:434)
        at scala.collection.Iterator$$anon$12.hasNext(Iterator.scala:440)
        at scala.collection.Iterator$class.foreach(Iterator.scala:893)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:927)
        at org.apache.spark.rdd.RDD$$anonfun$foreach$1$$anonfun$apply$28.apply(RDD.scala:927)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
        at org.apache.spark.SparkContext$$anonfun$runJob$5.apply(SparkContext.scala:2074)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:87)
        at org.apache.spark.scheduler.Task.run(Task.scala:109)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:345)
        at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
        at java.lang.Thread.run(Thread.java:748)

Here is my code:

    JavaSparkContext ct = new JavaSparkContext();
    Configuration config = ct.hadoopConfiguration();
    config.set("fs.azure", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
    config.set("org.apache.hadoop.fs.azure.SimpleKeyProvider", "<<key>>");

    JavaRDD<String> inputFile = ct.textFile("wasb://<<container-name>>@<<account>>.blob.core.windows.net/directory/file.txt");

    JavaRDD<String> wordsFromFile = inputFile.flatMap(content -> Arrays.asList(content.split(" ")));

    wordsFromFile.foreach(cc -> System.out.println("p :" + cc.toString()));
Please see if the thread below helps: stackoverflow.com/questions/37763472/… – bharathn-msft

1 Answer


For Spark running locally, there is an official blog that introduces how to access Azure Blob Storage from Spark. The key is that you need to configure your Azure Storage account as HDFS-compatible storage in the core-site.xml file and add the two JARs hadoop-azure and azure-storage to your classpath, so HDFS can be accessed via the wasb[s] protocol. You can refer to the official tutorial on HDFS-compatible storage with wasb, and to the blog about HDInsight configuration, for more details.
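As a rough illustration, here is a minimal sketch of doing the same setup programmatically instead of in core-site.xml (the `<account>`, `<container>` and key values are placeholders, and the property names follow the documented hadoop-azure pattern; adjust to your environment):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    public class WasbConfigExample {
        public static void main(String[] args) {
            SparkConf conf = new SparkConf().setAppName("WasbConfigExample");
            JavaSparkContext ct = new JavaSparkContext(conf);

            // Equivalent of the core-site.xml entries: register the wasb file system
            // implementation and the storage account key (placeholder values).
            Configuration hadoopConf = ct.hadoopConfiguration();
            hadoopConf.set("fs.wasb.impl", "org.apache.hadoop.fs.azure.NativeAzureFileSystem");
            hadoopConf.set("fs.azure.account.key.<account>.blob.core.windows.net", "<storage-account-key>");

            // With hadoop-azure and azure-storage on the classpath, wasb[s] paths
            // resolve like any other HDFS path.
            long lines = ct.textFile("wasb://<container>@<account>.blob.core.windows.net/directory/file.txt").count();
            System.out.println("lines: " + lines);

            ct.close();
        }
    }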

For Spark running on Azure, the only difference is that you access HDFS with wasb; the other preparation has already been done by Azure when the HDInsight cluster with Spark was created. To list and read files you can use wholeTextFiles on the SparkContext (or listFiles on the Hadoop FileSystem), as sketched below.
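A minimal sketch for the HDInsight side, again with placeholder container/account names (on the cluster no extra Hadoop configuration should be needed, since the storage account key is already wired in at cluster creation):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaPairRDD;
    import org.apache.spark.api.java.JavaSparkContext;

    public class HdInsightWasbExample {
        public static void main(String[] args) {
            // On an HDInsight Spark cluster the wasb[s] file system and account key
            // are already configured, so the wasb path can be used directly.
            JavaSparkContext ct = new JavaSparkContext(new SparkConf().setAppName("HdInsightWasbExample"));

            // wholeTextFiles reads every file under the directory as (path, content) pairs.
            JavaPairRDD<String, String> files =
                    ct.wholeTextFiles("wasb://<container>@<account>.blob.core.windows.net/directory/");
            files.keys().collect().forEach(path -> System.out.println("found: " + path));

            ct.close();
        }
    }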

Hope it helps.