1 vote

I'm using com.databricks.spark.avro. When I run it from spark-shell like so: spark-shell --jars spark-avro_2.11-4.0.0.jar, I am able to read the file by doing this:

import org.apache.spark.sql.SQLContext
val sqlContext = new SQLContext(sc)
// Read the Avro input into a DataFrame, then write it back out in the same format.
val avroInput = sqlContext.read.format("com.databricks.spark.avro").load(inputPath)
avroInput.write.format("com.databricks.spark.avro").save(outputPath)

But if I try to do the same thing from my project using sbt clean run, I get:

java.lang.ClassNotFoundException: Failed to find data source: org.apache.spark.sql.avro.AvroFileFormat. Please find packages at http://spark.apache.org/third-party-projects.html
[info]   at org.apache.spark.sql.execution.datasources.DataSource$.lookupDataSource(DataSource.scala:657)
[info]   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:194)
[info]   at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:178)
[info]   at com.databricks.spark.avro.package$AvroDataFrameReader$$anonfun$avro$2.apply(package.scala:34)

"com.databricks" %% "spark-avro" % "4.0.0" is listed in my Dependencies and it's in my external libraries. Is there another dependency I'm missing?


3 Answers

4 votes

Below are the dependencies you need when using Avro with Spark. Based on your setup, use one of the following.

Maven dependency

<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-avro_2.11</artifactId>
    <version>2.4.0</version>
</dependency>
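If you build with sbt rather than Maven, the equivalent dependency line would be:

libraryDependencies += "org.apache.spark" %% "spark-avro" % "2.4.0"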

spark-submit

When using spark-submit, provide spark-avro_2.12 and its dependencies directly using --packages, for example:

./bin/spark-submit --packages org.apache.spark:spark-avro_2.12:2.4.4

spark-shell

When working with spark-shell, you can also use --packages to add spark-avro_2.12 and its dependencies directly:

./bin/spark-shell --packages org.apache.spark:spark-avro_2.12:2.4.4
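Once the package is on the classpath, the data source is available under the built-in short name avro. A minimal read/write sketch (the paths are placeholders):

// Inside spark-shell, a SparkSession is already available as `spark`.
val df = spark.read.format("avro").load("/path/to/input.avro")
df.write.format("avro").save("/path/to/output")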

Change the spark-avro version to match the Spark and Scala versions you are using.

Refer to Using Avro Data Files From Spark SQL 2.4.x and later. Happy learning!

2 votes

Turns out I didn't have to use the Databricks jar. I added Apache spark-avro to my dependencies:

"org.apache.spark"             %% "spark-avro"           % "2.4.0"

And I was able to read my avro file into a DataFrame:

val avroInput = sparkSession.read
  .format("avro")
  .load("/pathtoFile/avroFile.avro")
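
Writing works the same way; for example (the output path is a placeholder):

avroInput.write
  .format("avro")
  .save("/pathtoFile/output")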
0 votes

Take a look at https://spark.apache.org/docs/latest/sql-data-sources-avro.html#deploying to see how to deploy the Avro jar along with your application jar through the spark-submit command. Specifically, you need to use the --packages option. This also works for spark-shell.
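For example, a sketch of such an invocation (the main class and application jar names below are hypothetical):

# The --class and jar names here are hypothetical examples.
./bin/spark-submit \
  --packages org.apache.spark:spark-avro_2.12:2.4.4 \
  --class com.example.MyApp \
  my-application.jar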