
I know there are already some questions about this, but none of them had enough information to fix my problem.

I am trying to run a job in yarn-client mode from my Eclipse project. I have a Hadoop cluster with 2 nodes (one of them is currently off). Running the job in cluster mode (with spark-submit) works. I also tried to run it locally from the Eclipse project, creating the SparkContext like this:

        SparkConf conf = new SparkConf().setAppName("AnomalyDetection-BuildModel").setMaster("local[*]");

and it works.

But when I try to run it with "yarn-client":

 SparkConf conf = new SparkConf().setAppName("AnomalyDetection-BuildModel").setMaster("yarn-client").set("driver-memory", "556m").set("executor-memory", "556m").set("executor-cores", "1").set("queue", "default");

I receive an error:

cannot assign instance of scala.collection.immutable.List$SerializationProxy to field org.apache.spark.rdd.RDD.org$apache$spark$rdd$RDD$$dependencies_ of type scala.collection.Seq in instance of org.apache.spark.rdd.MapPartitionsRDD

Another problem is that I don't know exactly how dependencies and compatibility work in this case, and why I don't receive any errors with local[*].

This is my pom.xml file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>buildModelTest</groupId>
    <artifactId>buildModelTest</artifactId>
    <version>1</version>
    <properties>
        <encoding>UTF-8</encoding>
        <scala.version>2.11.8</scala.version>
        <spark.version>2.1.0</spark.version>
        <hadoop.version>2.7.0</hadoop.version>
    </properties>
    <dependencies>
        <dependency>
            <groupId>junit</groupId>
            <artifactId>junit</artifactId>
            <version>3.8.1</version>
            <scope>test</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-core_2.10</artifactId>
            <version>2.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-mllib_2.10</artifactId>
            <version>2.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.scala-lang</groupId>
            <artifactId>scala-reflect</artifactId>
            <version>2.11.8</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.apache.spark</groupId>
            <artifactId>spark-yarn_2.10</artifactId>
            <version>2.1.0</version>
            <scope>provided</scope>
        </dependency>
        <dependency>
            <groupId>org.scalatest</groupId>
            <artifactId>scalatest_2.11</artifactId>
            <version>3.0.0</version>
            <scope>provided</scope>
        </dependency>
    </dependencies>
    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-jar-plugin</artifactId>
                <configuration>
                    <archive>
                        <manifest>
                            <mainClass>buildModelTest.Main</mainClass>
                            </manifest>
                    </archive>
                </configuration>
            </plugin>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>3.3</version>
            </plugin>
        </plugins>
    </build>
</project>

In the Eclipse project I have added the Hadoop config files, and in the run configuration I have set the environment variables SCALA_HOME, SPARK_HOME and HADOOP_CONF_DIR. Regarding Spark, I have spark-2.1.0-bin-hadoop2.7, and Scala 2.11.8. In my Java project I added all the jars from Spark/bin.

So do you have any idea why this is not working with "yarn-client"? Is it a dependency problem? If so, what is different between local and yarn-client in terms of dependencies? Maven already downloads some of the jars that I also add from Spark/bin, so I guess some of them are redundant.

EDIT

The SparkContext is initialized correctly (I guess). The error is thrown when I call the .rdd() method:

    JavaRDD<Vector> parsedTrainingData = data.map(new Function<String, Vector>() {

        private static final long serialVersionUID = 1L;

        public Vector call(String s) {
            String[] sarray = s.split(" ");
            double[] values = new double[sarray.length];
            for (int i = 0; i < sarray.length; i++) {
                values[i] = Double.parseDouble(sarray[i]);
            }
            return Vectors.dense(values);
        }
    });
    parsedTrainingData.cache();

    // Cluster the data into two classes using KMeans
    KMeansModel clusters = KMeans.train(parsedTrainingData.rdd(), numClusters, numIterations);

1 Answer

From your code, it looks like you are trying to run your Spark application on a YARN cluster.

SparkConf conf = new SparkConf().setAppName("AnomalyDetection-BuildModel").setMaster("yarn-client").set("driver-memory", "556m").set("executor-memory", "556m").set("executor-cores", "1").set("queue", "default");

Here, "yarn-client" is the wrong value to pass to setMaster().

When you set the master to local[*], your Spark application runs inside a single JVM on your machine.

To submit a Spark application to a running YARN cluster, call setMaster("yarn") and optionally set the deploy-mode property to client or cluster, as in the sketch below.
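
A minimal sketch of that configuration, keeping the values from your own snippet (the memory, core and queue settings are carried over as examples, with the documented spark.* keys instead of bare names like "driver-memory"):

    import org.apache.spark.SparkConf;
    import org.apache.spark.api.java.JavaSparkContext;

    // Sketch only: single "yarn" master in Spark 2.x plus an explicit deploy mode.
    // Memory/core/queue values are the ones from the question, not recommendations.
    SparkConf conf = new SparkConf()
            .setAppName("AnomalyDetection-BuildModel")
            .setMaster("yarn")
            .set("spark.submit.deployMode", "client")   // or "cluster"
            .set("spark.driver.memory", "556m")
            .set("spark.executor.memory", "556m")
            .set("spark.executor.cores", "1")
            .set("spark.yarn.queue", "default");

    JavaSparkContext sc = new JavaSparkContext(conf);

Note that when the driver runs inside your IDE (client mode), Spark still needs HADOOP_CONF_DIR or YARN_CONF_DIR to be set so it can find the ResourceManager.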

Please refer to this for more details on these parameters.

Also, if you want to submit your application from code instead of from the command line, then refer to this post.
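
If it helps, one supported way to do that from Java is the SparkLauncher API that ships with Spark. The sketch below is only illustrative; the jar path is a placeholder and the main class is taken from your pom.xml, so both would need to match your actual build output:

    import org.apache.spark.launcher.SparkAppHandle;
    import org.apache.spark.launcher.SparkLauncher;

    public class SubmitFromCode {
        public static void main(String[] args) throws Exception {
            // Launch the packaged application on YARN in client mode.
            // The app resource path below is a placeholder.
            SparkAppHandle handle = new SparkLauncher()
                    .setAppResource("/path/to/buildModelTest-1.jar")
                    .setMainClass("buildModelTest.Main")
                    .setMaster("yarn")
                    .setDeployMode("client")
                    .setConf(SparkLauncher.DRIVER_MEMORY, "556m")
                    .setConf(SparkLauncher.EXECUTOR_MEMORY, "556m")
                    .startApplication();

            // Wait until the application reaches a terminal state.
            while (!handle.getState().isFinal()) {
                Thread.sleep(1000);
            }
        }
    }

This keeps the submission logic in your own code while still going through the normal spark-submit machinery (SPARK_HOME must be visible to the launcher).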