I have a bunch of JSON files that I need to process. The structure of the JSON files (simplified for this example) is as follows; this is the schema the AWS Glue crawler produced in the catalog when it ran over those JSON files:
root
|-- Meta: struct
| |-- DataModel: string
| |-- EventType: string
| |-- EventDateTime: string
|-- User: struct
| |-- Demographics: struct
| | |-- FirstName: string
| | |-- MiddleName: string
| | |-- LastName: string
I want to merge User.Demographics.FirstName, User.Demographics.MiddleName, and User.Demographics.LastName into a single field, so that the final processed JSON looks like this:
root
|-- Meta: struct
| |-- DataModel: string
| |-- EventType: string
| |-- EventDateTime: string
|-- User: struct
| |-- Demographics: struct
| | |-- Name: string
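For concreteness, here is how I expect a single record to change (the sample values are made up, and I'm assuming the name parts should be joined with single spaces, skipping any part that is missing):

Input record:
{"Meta": {...}, "User": {"Demographics": {"FirstName": "John", "MiddleName": "Q", "LastName": "Public"}}}

Desired output record:
{"Meta": {...}, "User": {"Demographics": {"Name": "John Q Public"}}}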
I went through the AWS Glue Developer Guide pages describing DynamicFrames and their permitted transforms, but couldn't find any built-in function that looks helpful for merging fields.
As of now, I have the following code, auto-generated by AWS Glue, but it doesn't seem to work:
import sys
from awsglue.transforms import *
from awsglue.utils import getResolvedOptions
from pyspark.context import SparkContext
from awsglue.context import GlueContext
from awsglue.job import Job
## @params: [JOB_NAME]
args = getResolvedOptions(sys.argv, ['JOB_NAME'])
sc = SparkContext()
glueContext = GlueContext(sc)
spark = glueContext.spark_session
job = Job(glueContext)
job.init(args['JOB_NAME'], args)
## @type: DataSource
## @args: [database = <db_name>, table_name = <table_name>, transformation_ctx = "datasource0"]
## @return: datasource0
## @inputs: []
datasource0 = glueContext.create_dynamic_frame.from_catalog(database = <db_name>, table_name = <table_name>, transformation_ctx = "datasource0")
## @type: ApplyMapping
## @args: <Currently using mapping function>
## @return: applymapping1
## @inputs: [frame = datasource0]
applymapping1 = ApplyMapping.apply(frame = datasource0, mappings = [<Mapping Tuples>], transformation_ctx = "applymapping1")
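# (As far as I can tell, ApplyMapping only renames/casts individual fields;
# it doesn't seem able to combine several source fields into one target field.)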
## @type: DataSink
## @args: [connection_type = "s3", connection_options = {"path": <S3 Destination Path>}, format = "json", transformation_ctx = "datasink2"]
## @return: datasink2
## @inputs: [frame = applymapping1]
# datasink2 = glueContext.write_dynamic_frame.from_options(frame = applymapping1, connection_type = "s3", connection_options = {"path": <S3 Destination Path>}, format = "json", transformation_ctx = "datasink2")
job.commit()
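The closest built-in I could find is the Map transform, which applies a Python function to every record in a DynamicFrame. Below is a rough, untested sketch of what I imagine it would look like; the nested dict access and the handling of a missing MiddleName are my guesses:

from awsglue.transforms import Map

def merge_name(record):
    # Each record should arrive as a nested Python dict
    demographics = record["User"]["Demographics"]
    parts = [demographics.get("FirstName"),
             demographics.get("MiddleName"),
             demographics.get("LastName")]
    # Join the non-empty parts with single spaces and replace the struct
    record["User"]["Demographics"] = {"Name": " ".join(p for p in parts if p)}
    return record

merged1 = Map.apply(frame = datasource0, f = merge_name, transformation_ctx = "merged1")

Is this the right approach, or is there a more idiomatic Glue transform for this?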
AWS Glue also supports Python's PySpark library; if there's a way to implement this using PySpark, please share some details or links I can refer to.
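In case it helps clarify what I'm after, here is my untested guess at how this might look in plain PySpark, converting the DynamicFrame to a DataFrame and back (my understanding is that concat_ws skips nulls, which would cover records without a MiddleName):

from pyspark.sql import functions as F
from awsglue.dynamicframe import DynamicFrame

df = datasource0.toDF()
# Rebuild the User.Demographics struct with a single merged Name field
df = df.withColumn("User", F.struct(
    F.struct(
        F.concat_ws(" ",
            F.col("User.Demographics.FirstName"),
            F.col("User.Demographics.MiddleName"),
            F.col("User.Demographics.LastName")
        ).alias("Name")
    ).alias("Demographics")
))
merged2 = DynamicFrame.fromDF(df, glueContext, "merged2")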
P.S. I have some experience writing scripts in Python, but I have no prior experience with PySpark or any other ETL-related code/scripting.
Thank you.