1 vote

Using Spark 1.2.0.

Hi,

I want to save data from a Kafka stream to Parquet, applying a schema to a JSON dataset when creating a table with jsonRDD, as described here: https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html

The data is from Kafka and is coming through as a nested json.

Here is a basic example, reading from a text file, of how I've specified the schema for a non-nested JSON.

    //contents of json
    hdfs@2db12:~$ hadoop fs -cat User/names.json
    {"name":"Michael", "age":10}
    {"name":"Andy", "age":30}
    {"name":"Justin"}

    //create RDD from json
    scala> val names= sc.textFile("hdfs://10.0.11.8:8020/user/hdfs/User/names.json")
    scala> names.collect().foreach(println)
    {"name":"Michael", "age":10}
    {"name":"Andy", "age":30}
    {"name":"Justin"}

    // specify schema (StructType, StructField, StringType come from the Spark SQL package in 1.2)
    import org.apache.spark.sql._

    val schemaString = "name age gender"
    val schema = StructType(
      schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))

    val peopleSchemaRDD = sqlContext.jsonRDD(names, schema)

    scala> peopleSchemaRDD.printSchema()
    root
     |-- name: string (nullable = true)
     |-- age: string (nullable = true)
     |-- gender: string (nullable = true)

    scala> peopleSchemaRDD.registerTempTable("people")

    scala> sqlContext.sql("SELECT name, age, gender FROM people").collect().foreach(println)
    [Michael,10,null]
    [Andy,30,null]
    [Justin,null,null]

Is it possible to specify the schema for a nested JSON? For example, a JSON like this: {"filename":"details","attributes":{"name":"Michael", "age":10}}

Many Thanks

2 Answers

3 votes

You can use sqlContext.jsonFile() and let Spark infer the schema, provided at least one JSON record contains the gender field.
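A minimal sketch of that approach, assuming the records sit at the same HDFS path used in the question (gender only appears in the inferred schema if some record actually has it):

    // Let Spark SQL infer the schema by scanning the JSON records.
    val people = sqlContext.jsonFile("hdfs://10.0.11.8:8020/user/hdfs/User/names.json")
    people.printSchema()
    people.registerTempTable("people")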

Or define the nested schema explicitly:

    // Reuse schemaString ("name age gender") from the question for the inner fields.
    val schema = StructType(
      StructField("filename", StringType, true) ::
      StructField(
        "attributes",
        StructType(schemaString.split(" ").map(fieldName =>
          StructField(fieldName, StringType, true))),
        true
      ) :: Nil
    )
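A minimal sketch of applying that schema, assuming jsonLines is a hypothetical RDD[String] holding records like {"filename":"details","attributes":{"name":"Michael", "age":10}}:

    // Apply the nested schema, then query nested fields with dot notation.
    val detailsSchemaRDD = sqlContext.jsonRDD(jsonLines, schema)
    detailsSchemaRDD.registerTempTable("details")
    sqlContext.sql("SELECT filename, attributes.name, attributes.age FROM details")
      .collect().foreach(println)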
3 votes

A Java version. The link below helped me:

create nested dataframe programmatically with Spark

    // Imports needed (assuming this method lives in the SaveToCSV class referenced below):
    import java.util.ArrayList;
    import java.util.List;

    import org.apache.spark.SparkContext;
    import org.apache.spark.sql.AnalysisException;
    import org.apache.spark.sql.Dataset;
    import org.apache.spark.sql.Encoder;
    import org.apache.spark.sql.Encoders;
    import org.apache.spark.sql.SQLContext;
    import org.apache.spark.sql.SparkSession;
    import org.apache.spark.sql.types.ArrayType;
    import org.apache.spark.sql.types.DataTypes;
    import org.apache.spark.sql.types.StructField;
    import org.apache.spark.sql.types.StructType;

    public static void main(String[] args) throws AnalysisException {
        String master = "local[*]";

        // Top-level employee fields.
        List<StructField> employeeFields = new ArrayList<>();
        employeeFields.add(DataTypes.createStructField("firstName", DataTypes.StringType, true));
        employeeFields.add(DataTypes.createStructField("lastName", DataTypes.StringType, true));
        employeeFields.add(DataTypes.createStructField("email", DataTypes.StringType, true));

        // Nested address fields, modelled as an array of structs.
        List<StructField> addressFields = new ArrayList<>();
        addressFields.add(DataTypes.createStructField("city", DataTypes.StringType, true));
        addressFields.add(DataTypes.createStructField("state", DataTypes.StringType, true));
        addressFields.add(DataTypes.createStructField("zip", DataTypes.StringType, true));
        ArrayType addressStruct = DataTypes.createArrayType(DataTypes.createStructType(addressFields));

        employeeFields.add(DataTypes.createStructField("addresses", addressStruct, true));
        StructType employeeSchema = DataTypes.createStructType(employeeFields);

        SparkSession sparkSession = SparkSession
                .builder().appName(SaveToCSV.class.getName())
                .master(master).getOrCreate();

        SparkContext context = sparkSession.sparkContext();
        context.setLogLevel("ERROR");

        SQLContext sqlCtx = sparkSession.sqlContext();

        // Employee is a JavaBean matching employeeSchema (not shown here).
        Encoder<Employee> employeeEncoder = Encoders.bean(Employee.class);

        Dataset<Employee> rowDataset = sparkSession.read()
                .option("inferSchema", "false")
                .schema(employeeSchema)
                .json("simple_employees.json").as(employeeEncoder);

        rowDataset.createOrReplaceTempView("employeeView");

        sqlCtx.sql("select * from employeeView").show();

        sparkSession.close();
    }