Using Spark 1.2.0
Hi,
I want to save data from a Kafka stream to Parquet by applying a schema to a JSON dataset when creating a table using jsonRDD, as described here: https://databricks.com/blog/2015/02/02/an-introduction-to-json-support-in-spark-sql.html
The data comes from Kafka as nested JSON.
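For context, here is roughly the end-to-end flow I'm aiming for (an untested sketch on my part; the zkQuorum, consumer group, topic, batch interval and output path are placeholders, and schema/sqlContext are defined as in the example further down):
import org.apache.spark.streaming.{Seconds, StreamingContext}
import org.apache.spark.streaming.kafka.KafkaUtils

val ssc = new StreamingContext(sc, Seconds(10))
// createStream returns (key, message) pairs; keep just the JSON payload
val lines = KafkaUtils.createStream(ssc, "zkhost:2181", "my-group", Map("my-topic" -> 1)).map(_._2)

lines.foreachRDD { rdd =>
  if (rdd.take(1).nonEmpty) { // RDD.isEmpty is not available in 1.2
    // apply the schema and write each batch to its own Parquet directory
    sqlContext.jsonRDD(rdd, schema).saveAsParquetFile(
      "hdfs://10.0.11.8:8020/user/hdfs/parquet/batch-" + System.currentTimeMillis)
  }
}
ssc.start()
ssc.awaitTermination()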
Here is a basic example, reading from a text file, of how I've specified the schema for a non-nested JSON.
//contents of json
hdfs@2db12:~$ hadoop fs -cat User/names.json
{"name":"Michael", "age":10}
{"name":"Andy", "age":30}
{"name":"Justin"}
//create RDD from json
scala> val names = sc.textFile("hdfs://10.0.11.8:8020/user/hdfs/User/names.json")
scala> names.collect().foreach(println)
{"name":"Michael", "age":10}
{"name":"Andy", "age":30}
{"name":"Justin"}
// specify schema (in Spark 1.2 these types come from org.apache.spark.sql)
import org.apache.spark.sql._

val schemaString = "name age gender"
val schema =
  StructType(
    schemaString.split(" ").map(fieldName => StructField(fieldName, StringType, true)))
val peopleSchemaRDD = sqlContext.jsonRDD(names, schema)
scala> peopleSchemaRDD.printSchema()
root
|-- name: string (nullable = true)
|-- age: string (nullable = true)
|-- gender: string (nullable = true)
scala> peopleSchemaRDD.registerTempTable("people")
scala> sqlContext.sql("SELECT name,age,gender FROM people").collect().foreach(println)
[Michael,10,null]
[Andy,30,null]
[Justin,null,null]
Is it possible to specify the schema for a nested JSON? For example, a record like this: {"filename":"details","attributes":{"name":"Michael", "age":10}}
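To show what I mean, here is my untested guess at how a nested schema might be written (field names match the sample record above; detailsLines would be an RDD[String] of such records):
// nest a StructType as the type of the "attributes" field
val nestedSchema =
  StructType(Seq(
    StructField("filename", StringType, true),
    StructField("attributes", StructType(Seq(
      StructField("name", StringType, true),
      StructField("age", StringType, true))), true)))

val details = sqlContext.jsonRDD(detailsLines, nestedSchema)
details.registerTempTable("details")
// nested fields would presumably be queried with dot notation
sqlContext.sql("SELECT filename, attributes.name, attributes.age FROM details")
Is something along these lines the right approach?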
Many Thanks